Search results for: Koha Open Source Software
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11578

8848 Diffusion MRI: Clinical Application in Radiotherapy Planning of Intracranial Pathology

Authors: Pomozova Kseniia, Gorlachev Gennadiy, Chernyaev Aleksandr, Golanov Andrey

Abstract:

In clinical practice, and especially in stereotactic radiosurgery planning, the significance of diffusion-weighted imaging (DWI) is growing. This makes software that can quickly process and reliably visualize diffusion data, equipped with tools for analyzing them for different tasks, essential. We are developing the «MRDiffusionImaging» software in standard C++. The subject-domain part has been moved to separate class libraries and can be used on various platforms. The user interface is built with Windows WPF (Windows Presentation Foundation), a technology for managing Windows applications with access to all components of the .NET 5 or .NET Framework platform ecosystem. One of its important features is the use of a declarative markup language, XAML (eXtensible Application Markup Language), with which one can conveniently create, initialize and set properties of objects with hierarchical relationships. Graphics are generated using the DirectX environment. The MRDiffusionImaging software package has been implemented for processing diffusion magnetic resonance imaging (dMRI) and allows loading and viewing images sorted by series. An algorithm for "masking" dMRI series based on T2-weighted images was developed using a deformable surface model to exclude tissues unrelated to the area of interest from the analysis. An algorithm for distortion correction using deformable image registration based on autocorrelation of local structure has also been developed. The maximum voxel dimension was 1.03 ± 0.12 mm. In an elementary volume of the brain, the diffusion tensor is interpreted geometrically as an ellipsoid, which is an isosurface of the probability density of a molecule's diffusion. For the first time, non-parametric intensity distributions, neighborhood correlations, and inhomogeneities are combined in one algorithm for segmentation of white matter (WM), grey matter (GM), and cerebrospinal fluid (CSF). A tool for calculating the average diffusion coefficient and fractional anisotropy has been created, on the basis of which quantitative maps can be built for solving various clinical problems. Functionality has been created that allows clustering and segmenting images to individualize the clinical volume of radiation treatment and to assess the response (median Dice score = 0.963 ± 0.137). White matter tracts of the brain were visualized using two algorithms: deterministic (fiber assignment by continuous tracking) and probabilistic, based on the Hough transform. The probabilistic algorithm tests candidate curves in each voxel, assigning to each one a score computed from the diffusion data, and then selects the curves with the highest scores as the potential anatomical connections. In the context of functional radiosurgery, it is possible to reduce the volume of the internal capsule receiving 12 Gy from 0.402 cc to 0.254 cc. «MRDiffusionImaging» will improve the efficiency and accuracy of diagnostics and stereotactic radiotherapy of intracranial pathology. We develop software with integrated, intuitive support for processing and analysis of diffusion data and for their inclusion in radiotherapy planning and the evaluation of its results.
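
The quantitative maps mentioned above rest on two standard scalar measures of the diffusion tensor. A minimal sketch of how they are computed per voxel (not the authors' C++ implementation), assuming a symmetric 3x3 tensor D in mm²/s:

```python
import numpy as np

def md_fa(D):
    """Mean diffusivity and fractional anisotropy of a 3x3 diffusion tensor."""
    evals = np.linalg.eigvalsh(D)            # eigenvalues of the symmetric tensor
    md = evals.mean()                         # mean diffusivity = trace(D) / 3
    # FA: normalized dispersion of the eigenvalues, in [0, 1]
    num = np.sqrt(((evals - md) ** 2).sum())
    den = np.sqrt((evals ** 2).sum())
    fa = np.sqrt(1.5) * num / den if den > 0 else 0.0
    return md, fa

# Example: a prolate tensor typical of coherent white matter
D = np.diag([1.7e-3, 0.3e-3, 0.3e-3])         # mm^2/s
md, fa = md_fa(D)
print(f"MD = {md:.2e} mm^2/s, FA = {fa:.2f}")  # FA ~ 0.80
```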

Keywords: diffusion-weighted imaging, medical imaging, stereotactic radiosurgery, tractography

Procedia PDF Downloads 92
8847 Preparation and Sealing of Polymer Microchannels Using EB Lithography and Laser Welding

Authors: Ian Jones, Jonathan Griffiths

Abstract:

Laser welding offers the potential for making very precise joints in plastics products, both in terms of the joint location and the amount of heating applied. These methods have allowed the production of complex products such as microfluidic devices, where channel and structure resolutions below 100 µm are regularly used. To date, however, the dimension of welds made using lasers has been limited by the achievable focus spot size of the laser source. Theoretically, the minimum spot size possible from a laser is comparable to the wavelength of the radiation emitted. Practically, with reasonable focal-length optics, the achievable spot size is a few factors larger than this, and the melt zone in a plastics weld is larger again. The narrowest welds feasible to date have therefore been 10-20 µm wide using a near-infrared laser source. The aim of this work was to prepare laser-absorber tracks and channels less than 10 µm wide in PMMA thermoplastic using EB lithography, followed by sealing of the channels using laser welding, to produce welds with widths of the order of 1 µm, below the resolution limit of the near-infrared laser used. Welded joints with a width of 1 µm have been achieved, as well as channels with a width of 5 µm. The procedure was based on the principle of transmission laser welding using a thin coating of infrared-absorbent material at the joint interface. The coating was patterned using electron-beam lithography to obtain the required resolution in a reproducible manner, and that resolution was retained after the transmission laser welding process. The joint strength was verified using larger-scale samples. The results demonstrate that plastics products could be made with a high density of structure with resolution below 1 µm, and that welding can be applied without excessively heating regions beyond the weld lines. This may be applied to smaller-scale sensor and analysis chips, micro bio- and chemical reactors, and microelectronic packaging.
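
For a rough sense of why near-infrared welds bottom out at 10-20 µm, the diffraction-limited spot diameter of a focused Gaussian beam is d ≈ 4λf/(πD). A quick check with assumed values (the wavelength, focal length and beam diameter below are illustrative, not taken from the paper):

```python
import math

# Gaussian-beam diffraction-limited spot diameter: d ~ 4 * lambda * f / (pi * D)
wavelength = 0.98e-6   # m, assumed near-infrared diode laser line
focal_len = 0.10       # m, assumed focusing-lens focal length
beam_diam = 0.01       # m, assumed collimated input beam diameter

spot = 4 * wavelength * focal_len / (math.pi * beam_diam)
print(f"spot diameter ~ {spot * 1e6:.1f} um")   # ~12.5 um: the 10-20 um regime
```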

Keywords: microchannels, polymer, EB lithography, laser welding

Procedia PDF Downloads 404
8846 Efficient Study of Substrate Integrated Waveguide Devices

Authors: J. Hajri, H. Hrizi, N. Sboui, H. Baudrand

Abstract:

This paper presents a study of SIW (Substrate Integrated Waveguide) circuits using a rigorous and fast original approach based on an iterative process, the wave concept iterative process (WCIP). The suggested theoretical study is validated by the simulation of two different examples of SIW circuits. The obtained results are in good agreement with measurements and with HFSS software simulations.

Keywords: convergence study, HFSS, modal decomposition, SIW circuits, WCIP method

Procedia PDF Downloads 500
8845 Integration of EEG and Motion Tracking Sensors for Objective Measure of Attention-Deficit Hyperactivity Disorder in Pre-Schoolers

Authors: Neha Bhattacharyya, Soumendra Singh, Amrita Banerjee, Ria Ghosh, Oindrila Sinha, Nairit Das, Rajkumar Gayen, Somya Subhra Pal, Sahely Ganguly, Tanmoy Dasgupta, Tanusree Dasgupta, Pulak Mondal, Aniruddha Adhikari, Sharmila Sarkar, Debasish Bhattacharyya, Asim Kumar Mallick, Om Prakash Singh, Samir Kumar Pal

Abstract:

Background: We aim to develop an integrated device comprising a single-probe EEG sensor and a CCD-based motion sensor for a more objective measure of Attention-Deficit Hyperactivity Disorder (ADHD). While the integrated device (MAHD) relies on the EEG signal (spectral density of the beta wave) for the assessment of attention during a given structured task (painting three segments of a circle using three different colors, namely red, green and blue), the CCD sensor depicts the movement pattern of the subjects engaged in a continuous performance task (CPT). A statistical analysis of the attention and movement patterns was performed, and the accuracy of the completed tasks was analyzed using indigenously developed software. The device with the embedded software, called MAHD, is intended to improve certainty with criterion E (i.e., whether symptoms are better explained by another condition). Methods: We used the EEG signal from a single-channel dry sensor placed on the frontal lobe of the head of the subjects (3-5 year old pre-schoolers). During the painting of three segments of a circle using three distinct colors (red, green, and blue), the absolute power of the delta and beta EEG waves from the subjects was found to be correlated with relaxation and attention/cognitive-load conditions, respectively. While the relaxation condition of the subject hints at hyperactivity, a more direct CCD-based motion sensor is used to track the physical movement of the subject engaged in a continuous performance task (CPT), i.e., separation of variously colored balls from one table to another. We used our indigenously developed software for the statistical analysis to derive a scale for the objective assessment of ADHD. We also compared our scale with clinical ADHD evaluation. Results: In a limited clinical trial with preliminary statistical analysis, we found a significant correlation between the objective assessment of the ADHD subjects and the clinician’s conventional evaluation. Conclusion: MAHD, the integrated device, is intended as an auxiliary tool to improve the accuracy of ADHD diagnosis by supporting greater criterion E certainty.
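
As a sketch of the EEG side of such a method, absolute band power can be estimated from a single channel via Welch's power spectral density. The sampling rate, band edges and synthetic signal below are assumptions for illustration, not the authors' parameters:

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, lo, hi):
    """Absolute power in the [lo, hi) Hz band of a single-channel EEG trace."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)   # 2 s windows, 0.5 Hz bins
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])   # integrate the PSD

fs = 256                                    # Hz, assumed sampling rate
eeg = np.random.randn(fs * 60)              # stand-in for a 60 s recording
delta = band_power(eeg, fs, 1, 4)           # relaxation marker
beta = band_power(eeg, fs, 13, 30)          # attention / cognitive-load marker
print(f"delta = {delta:.3f}, beta = {beta:.3f}, ratio = {delta / beta:.2f}")
```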

Keywords: ADHD, CPT, EEG signal, motion sensor, psychometric test

Procedia PDF Downloads 103
8844 Seismic Retrofits – A Catalyst for Minimizing the Building Sector’s Carbon Footprint

Authors: Juliane Spaak

Abstract:

A life-cycle assessment was performed on seven retrofit projects in New Zealand using LCAQuick V3.5. The study found that retrofits save up to 80% of embodied carbon emissions for the structural elements compared to a new building. In other words, it takes only a 20% carbon investment to transform and extend a building’s life. In addition, the systems were evaluated by looking at environmental impacts over the design life of these buildings and at resilience, using FEMA P-58 and PACT software. With the increasing interest in zero-carbon targets, significant changes in the building and construction sector are required. Emissions from buildings arise from both embodied carbon and operations. Given the significant advancements in building energy technology, the focus is moving more toward embodied carbon, a large portion of which is associated with the structure. Since older buildings make up most of the real estate stock of cities around the world, their reuse through structural retrofit and wider refurbishment plays an important role in extending the life of a building’s embodied carbon. New Zealand’s building owners and engineers have learned a lot about seismic issues following a decade of significant earthquakes. Recent earthquakes have brought to light the necessity of moving away from constructing code-minimum structures that are designed for life safety but are frequently ‘disposable’ after a moderate earthquake event, especially in relation to a structure’s ability to minimize damage. Weaker buildings thus sit as ‘carbon liabilities’, with considerably more carbon likely to be expended remediating damage after a shake. Renovating and retrofitting older assets plays a big part in reducing the carbon profile of the building sector, as breathing new life into a building’s structure is vastly more sustainable than even the highest quality ‘green’ new builds, which are inherently more carbon-intensive. The demolition of viable older buildings (often including heritage buildings) is increasingly at odds with society’s desire for a lower-carbon economy. Bringing seismic resilience and carbon best practice together in decision-making can open the door to commercially attractive outcomes, with retrofits that include structural and sustainability upgrades transforming the asset’s revenue generation. Across the global real estate market, tenants are increasingly demanding that the buildings they occupy be resilient and aligned with their own climate targets. The relationship between seismic performance and ‘sustainable design’ has yet to fully mature, yet in a wider context it is of profound consequence. A whole-of-life carbon perspective on a building means designing for the likely natural hazards within the asset’s expected lifespan, be they earthquakes, storms, bushfires, fires, and so on, with financial mitigation (e.g., insurance) part, but not all, of the picture.

Keywords: retrofit, sustainability, earthquake, reuse, carbon, resilient

Procedia PDF Downloads 75
8843 Low-Temperature Poly-Si Nanowire Junctionless Thin Film Transistors with Nickel Silicide

Authors: Yu-Hsien Lin, Yu-Ru Lin, Yung-Chun Wu

Abstract:

This work demonstrates ultra-thin poly-Si (polycrystalline silicon) nanowire junctionless thin-film transistors (NW JL-TFTs) with nickel silicide contacts. For the nickel silicide film, a two-step annealing process was designed to form an ultra-thin, uniform and low-sheet-resistance (Rs) Ni silicide film. Nickel silicide techniques are widely used for high-performance devices as devices scale down, owing to the source/drain sheet resistance issue; the self-aligned silicide (salicide) technique is therefore used to reduce the series resistance of the device. Nickel silicide has several advantages, including a low-temperature process, low silicon consumption, no bridging-failure property, smaller mechanical stress, and smaller contact resistance. The junctionless thin-film transistor (JL-TFT) is fabricated simply by heavily doping the channel and source/drain (S/D) regions simultaneously. Owing to this special doping profile, the JL-TFT has advantages such as a lower thermal budget, which makes it easier to integrate with high-k/metal-gate than conventional MOSFETs (Metal Oxide Semiconductor Field-Effect Transistors), a longer effective channel length than conventional MOSFETs, and avoidance of complicated source/drain engineering. To solve the JL-TFT's turn-off problem, an ultra-thin body (UTB) structure is needed so that the channel region is fully depleted in the off-state. On the other hand, the drive current (Iᴅ) declines as transistor features are scaled. This work therefore investigates the low-temperature formation of the nickel silicide layer by physical vapor deposition (PVD) of a 15 nm Ni layer on the poly-Si substrate. The first annealing step promoted Ni diffusion through a thin interfacial amorphous layer, after which the unreacted metal was lifted off. The second annealing step lowered the sheet resistance and firmly merged the silicide phase. The resulting NW JL-TFT with nickel silicide contacts exhibits good electrical properties, including a high on/off current ratio (>10⁷), a subthreshold slope of 186 mV/dec, and low parasitic resistance. This work also compares the electrical characteristics of NW JL-TFTs with nickel silicide and non-silicided contacts. In short, the NW JL-TFT with nickel silicide contacts exhibits competitive short-channel behavior and improved drive current.
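
The quoted subthreshold slope can be extracted from a transfer curve as the gate-voltage change per decade of drain current. A minimal sketch on synthetic data (the 186 mV/dec figure is built into the example; this is not the authors' measurement code):

```python
import numpy as np

def subthreshold_slope(vg, id_amps):
    """Minimum subthreshold slope (mV/dec) from a transfer curve Id(Vg)."""
    logi = np.log10(id_amps)
    ss = np.diff(vg) / np.diff(logi) * 1e3   # mV per decade of current
    return ss[ss > 0].min()

# Synthetic subthreshold region: Id grows one decade per 186 mV of gate bias
vg = np.linspace(0.0, 0.93, 6)               # V
id_amps = 1e-12 * 10 ** (vg / 0.186)         # A
print(f"SS ~ {subthreshold_slope(vg, id_amps):.0f} mV/dec")   # 186 mV/dec
```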

Keywords: poly-Si, nanowire, junctionless, thin-film transistors, nickel silicide

Procedia PDF Downloads 242
8842 Using Printouts as Social Media Evidence and Its Authentication in the Courtroom

Authors: Chih-Ping Chang

Abstract:

Unlike traditional objective evidence, social media evidence has its own characteristics: it is easily tampered with, it is recoverable, and it cannot be read without other devices (such as a computer). Simply taking a screenshot from a social network site raises questions about the evidence's authenticity. When the police search and seize digital information, a common practice is to directly print out the digital data obtained and ask the parties present to sign the printouts, without taking the original digital data back. Besides the issue of authenticity, this way of obtaining evidence may have two further consequences. First, it invites allegations that the evidence was tampered with, i.e., that the police falsified evidence to frame the suspect. Second, it makes it difficult to discover hidden information. The core evidence associated with a crime may not appear in the visible contents of files. By examining the original file, data related to it, such as the original producer, creation time, modification date, and even GPS location, can be revealed from hidden information. Therefore, how to present this kind of evidence in the courtroom is arguably the most important task when ruling on social media evidence. This article first introduces forensic software such as EnCase, TCT and FTK, and analyzes their function of proving identity with other digital data. Turning back to the court, the second part of the article discusses the legal standard for authentication of social media evidence and the application of such forensic software in the courtroom. In conclusion, the article offers a rethinking: what kind of authenticity is this rule of evidence chasing? Does the legal system automatically operate the transcription of scientific knowledge, or does it rather aim to better render justice, not only on scientific fact but through multivariate debate?
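
As an illustration of the hidden information at stake (not the EnCase/TCT/FTK workflow itself), a minimal Python sketch of the metadata that survives in an original image file but not in a printout; the file name is hypothetical:

```python
import os
from datetime import datetime
from PIL import Image, ExifTags          # pip install Pillow

path = "evidence_photo.jpg"              # hypothetical file

# Filesystem timestamps: lost entirely when only a printout is preserved
st = os.stat(path)
print("last modified:", datetime.fromtimestamp(st.st_mtime))

# Embedded EXIF: the creating device and creation time live in the main IFD
exif = Image.open(path).getexif()
for tag_id, value in exif.items():
    tag = ExifTags.TAGS.get(tag_id, tag_id)
    if tag in ("Make", "Model", "DateTime"):
        print(tag, "->", value)

# GPS coordinates sit in their own EXIF sub-directory
gps = exif.get_ifd(ExifTags.IFD.GPSInfo)
print("GPS tags:", {ExifTags.GPSTAGS.get(k, k): v for k, v in gps.items()})
```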

Keywords: federal rule of evidence, internet forensic, printouts as evidence, social media evidence, United States v. Vayner

Procedia PDF Downloads 296
8841 The Usage of Adobe in Historical Structures of Van City

Authors: Mustafa Gülen, Eylem Güzel, Soner Guler

Abstract:

Studies concentrating on the historical background of Van show that Van has held a significant position as a settlement since ancient times and that it has hosted many civilizations throughout history. With the dominance of the Ottoman Empire in the 16th century, the region was re-constructed by building new walls at the southern side of Van Castle. These construction activities were mostly carried out using adobe, which had been a fundamental building material for thousands of years. As a result of natural disasters, battles, and the relocation, at the threshold of the 20th century, to a new settlement 9 kilometers away, the ancient city of Van is now an open-air museum with the ruins of churches, mosques and baths. In this study, the usage of adobe in the historical structures of the city of Van is evaluated in detail.

Keywords: historical structures, adobe, Van city

Procedia PDF Downloads 615
8840 Development of a Plant-Based Dietary Supplement to Address Critical Micronutrient Needs of Women of Child-Bearing Age in Europe

Authors: Sara D. Garduno-Diaz, Ramona Milcheva, Chanyu Xu

Abstract:

Women’s reproductive stages (pre-pregnancy, pregnancy, and lactation) represent a time of higher micronutrient needs. With a healthy food selection as the first path of choice to cover these increased needs, tandem micronutrient supplementation is often required. Because pregnancy and lactation should be treated with care, all supplements consumed should be made of quality ingredients and manufactured through controlled processes. This work describes the process followed for the development of plant-based multiple-micronutrient supplements aimed at addressing the growing demand for natural ingredients of non-animal origin. A list of key nutrients for inclusion was prioritized, followed by the identification and selection of qualified raw-ingredient providers. Nutrient absorption into the food matrix was carried out through natural processes. The outcome is a new line of products meeting the set criteria: gluten- and lactose-free, suitable for vegans/vegetarians, and without artificial preservatives. In addition, each product provides the consumer with 10 vitamins, 6 inorganic nutrients, 1 source of essential fatty acids, and 1 source of phytonutrients (maca, moringa, or chlorella). Each raw material, as well as the final product, was submitted to three-fold microbiological control (in-house and external). The final micronutrient mix was then tested for human-factor contamination, pesticides, total aerobic microbial count, total yeast count, and total mold count. The product was created with the aim of meeting product standards for the European Union, as well as specific requirements of the German market in the food and pharma fields. The results presented here cover the development up to the introduction of the newly developed product to the market; acceptability and effectiveness results will be published at a later date.

Keywords: fertility, lactation, organic, pregnancy, vegetarian

Procedia PDF Downloads 152
8839 Production and Purification of Pectinase by Aspergillus Niger

Authors: M. Umar Dahot, G. S. Mangrio

Abstract:

In this study, agro-industrial waste was used as a carbon source, as it is a low-cost substrate. Along with this, various sugars and molasses at 2.5% and 5% were investigated as substrates/carbon sources for the growth of A. niger and pectinase production. Different nitrogen sources were also used. An overview of the results obtained shows that 5% sucrose, 5% molasses and 0.4% (NH4)2SO4 were the best carbon and nitrogen sources for the production of pectinase by A. niger. The maximum production of pectinase (26.87 units/ml) was observed at pH 6.0 after 72 h of incubation. The optimum temperature for pectinase production was 35°C, at which a maximum of 28.25 units/ml was obtained. The pectinase was purified by ammonium sulphate precipitation, and the dialyzed sample was finally applied to gel filtration chromatography (Sephadex G-100) and ion-exchange chromatography (DEAE A-50). The enzyme was purified 2.5-fold by gel chromatography on Sephadex G-100, and four fractions were obtained; fractions 1, 2 and 4 showed a single band, while fraction 3 showed multiple bands on SDS-PAGE electrophoresis. Fraction 3 was pooled, dialyzed and separated on Sephadex A-50, and the two resulting fractions, 3a and 3b, each showed a single band. The molecular weights of the purified fractions were in the range of 33,000 ± 2,000 and 38,000 ± 2,000 Daltons. The purified enzyme was most active with pure pectin, while lemon pectin and orange peel gave lower activity compared with the control. The optimum pH and temperature for pectinase activity were found to be pH 5.0-6.0 and 40-50°C, respectively. The enzyme was stable over the pH range 3.0-8.0. The thermostability was determined, and it was observed that the pectinase activity is heat stable, retaining more than 40% of its activity when incubated at 90°C for 10 minutes. The pectinase activity of fractions 3a and 3b increased with different metal ions. Pectinase activity was stimulated by 10-30% in the presence of CaCl2. ZnSO4, MnSO4 and MgSO4 gave higher activity in fractions 3a and 3b, which indicates that the pectinase belongs to the metallo-enzymes. It is concluded that A. niger is capable of producing a pH-stable and thermostable pectinase, which can be used for industrial purposes.
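
Fold purification is conventionally the ratio of specific activities (units per mg of protein) after and before a purification step. A worked sketch; the protein concentrations below are invented solely to reproduce the reported 2.5-fold figure:

```python
# Fold purification = specific activity (U/mg) of the purified fraction
# divided by that of the crude extract; all figures here are illustrative.
crude_activity, crude_protein = 26.87, 12.0   # U/ml, mg/ml (assumed protein)
pure_activity, pure_protein = 16.8, 3.0       # U/ml, mg/ml (assumed)

crude_sa = crude_activity / crude_protein     # ~2.24 U/mg
pure_sa = pure_activity / pure_protein        # ~5.60 U/mg
print(f"fold purification = {pure_sa / crude_sa:.1f}")   # ~2.5, as reported
```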

Keywords: pectinase, A. niger, production, purification, characterization

Procedia PDF Downloads 416
8838 Experimental Study of Boost Converter Based PV Energy System

Authors: T. Abdelkrim, K. Ben Seddik, B. Bezza, K. Benamrane, Aeh. Benkhelifa

Abstract:

This paper proposes an implementation of a boost converter for a resistive load using photovoltaic energy as a source. The model of the photovoltaic cell and the operating principle of the boost converter are presented. A PIC microcontroller is used in closed-loop control to generate the pulses controlling the converter circuit. To evaluate the performance of the boost converter, the output voltage of the PV panel is varied by shading one and two cells.
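
The duty-cycle adjustment at the heart of such closed-loop control follows from the ideal continuous-conduction-mode boost relation Vout = Vin / (1 - D). A sketch with assumed voltages (not the paper's measured values):

```python
# Ideal boost converter in continuous conduction mode: Vout = Vin / (1 - D).
# Shading PV cells drops Vin, so the duty cycle D must rise to hold Vout.
def duty_for_target(v_in, v_out):
    return 1.0 - v_in / v_out

v_out = 48.0                                # V, assumed regulated output bus
for v_in in (17.0, 15.0, 13.0):             # PV voltage as cells are shaded
    d = duty_for_target(v_in, v_out)
    print(f"Vin = {v_in:4.1f} V -> D = {d:.2f}")
```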

Keywords: boost converter, microcontroller, photovoltaic power generation, shading cells

Procedia PDF Downloads 882
8837 Rest Behavior and Restoration: Searching for Patterns through a Textual Analysis

Authors: Sandra Christina Gressler

Abstract:

Resting is essentially physical and mental relaxation. So, can behaviors that go beyond merely physical relaxation to some extent be understood as restoration behaviors? Studies on restorative environments emphasize the physical, mental and social benefits that some environments can provide and suggest that activities in natural environments reduce the stress of daily life, promoting recovery from daily wear. These studies, though specific in their results, do not unify the different possibilities of restoration. Considering the importance of restorative environments in promoting well-being, this research aims to verify the applicability of the theory of restorative environments in a Brazilian context, inquiring about environments/behaviors of rest. The research sought to achieve its goals by: a) identifying the daily ways in which participants interact/connect with nature; b) identifying resting environments/behaviors; c) verifying whether rest strategies match the restorative environments suggested by restoration studies; and d) verifying different rest strategies in relation to time. Workers from different companies, in which certain functions require focused attention, and high school students from different schools participated in this study. An interview was used to collect data and information. The data obtained were compared with studies of attention restoration theory and stress recovery. The collected data were analyzed through basic descriptive statistics and the software ALCESTE® (Analyse Lexicale par Contexte d'un Ensemble de Segments de Texte). The open questions investigate the perception of nature on a daily basis (analyzed using ALCESTE); rest periods on weekdays, weekends and holidays (analyzed using ALCESTE with tri-croisé); and resting environments and activities (analyzed using simple descriptive statistics). According to the results, environments with natural characteristics that are compatible with personal desires (physical aspects and distance), and residential environments when they fulfill the characteristics of refuge, safety, and self-expression (characteristics of a primary territory), meet the requirements of restoration. The analyses suggest that the perception of nature has a wide range that goes beyond nearby objects that can be touched, as well as the observation and contemplation of details. The restoration processes described in studies of attention restoration theory occur gradually (hierarchically), starting with being away, followed by compatibility, fascination, and extent. They are also associated with the time available for rest. The relation between rest behaviors and the bio-demographic characteristics of the participants is noted. This reinforces, in studies of restoration, the need to investigate not only the physical characteristics of the environment but also behavior, social relationships, subjective reactions, distance and available time. The complexity of the theme indicates the necessity of multi-method studies. As a practical contribution, the findings provide subsidies for developing strategies to promote the welfare of the population.

Keywords: attention restoration theory, environmental psychology, rest behavior, restorative environments

Procedia PDF Downloads 199
8836 Biomechanical Performance of the Synovial Capsule of the Glenohumeral Joint with a BANKART Lesion through Finite Element Analysis

Authors: Duvert A. Puentes T., Javier A. Maldonado E., Ivan Quintero., Diego F. Villegas

Abstract:

Computational mechanics is a great tool to study the performance of complex models, and the study of the structure of the human body is an example. This paper took advantage of different types of software to make a 3D model of the glenohumeral joint and apply finite element analysis. The main objective was to study the change in the biomechanical properties of the joint when it presents an injury, specifically a Bankart lesion, which consists of the detachment of the anteroinferior labrum from the glenoid. The stress and strain distributions of the soft tissues were the focus of this study. First, a 3D model of a joint without any pathology was made as a control sample, using segmentation software for the bones with the support of medical imagery and a cadaveric model to represent the soft tissue. The joint was built to simulate a compression and external rotation test, using CAD to prepare the model in the adequate position. When the healthy model was finished, it was submitted to a finite element analysis, and the results were validated against experimental model data. With the validated model, a mesh sensitivity analysis was carried out to obtain the best mesh size. Finally, the geometry of the 3D model was changed to imitate a Bankart lesion: the contact zone of the glenoid with the labrum was slightly separated, simulating tissue detachment. With this new geometry, the finite element analysis was applied again, and the results were compared with the control sample created initially. The data gathered in this study can be used to improve the understanding of labrum tears. Nevertheless, it is important to remember that computational analyses are approximations and that the initial data were taken from an in vitro assay.

Keywords: biomechanics, computational model, finite elements, glenohumeral joint, bankart lesion, labrum

Procedia PDF Downloads 164
8835 Interface Designer as Cultural Producer: A Dialectic Materialist Approach to the Role of Visual Designer in the Present Digital Era

Authors: Cagri Baris Kasap

Abstract:

In this study, how interface designers can be viewed as producers of culture in the current era is interrogated from a critical theory perspective. Walter Benjamin was a German Jewish literary critical theorist who, during the 1930s, was engaged in opposing and criticizing the Nazi use of art and media. ‘The Author as Producer’ is an essay that Benjamin read at the Communist Institute for the Study of Fascism in Paris. In this essay, Benjamin relates directly to the dialectics between base and superstructure and argues that authors, normally placed within the superstructure, should consider how writing and publishing are production and directly related to the base. Through it, he discusses what it could mean to see the author as producer of his own text, as a producer of writing, understood as an ideological construct that rests on the apparatus of production and distribution. Benjamin concludes that the author must write in ways that relate to the conditions of production; he must do so in order to prepare his readers to become writers, and even make this possible for them by engineering an ‘improved apparatus’, and must work toward turning consumers into producers and collaborators. In today’s world, it has become a leading business model within the Web 2.0 services of multinational Internet technologies and culture industries like Amazon, Apple and Google to transform readers, spectators, consumers or users into collaborators and co-producers through platforms such as Facebook, YouTube and Amazon’s CreateSpace and Kindle Direct Publishing print-on-demand, e-book and publishing platforms. However, the way this transformation happens is tightly controlled and monitored by combinations of software and hardware. In these global market monopolies, it has become increasingly difficult to gain insight into how one’s writing and collaboration are used, captured, and capitalized as a user of Facebook or Google. Through the lens of this study, it could be argued that this criticism could very well be considered by digital producers, or even by the mass of collaborators, in contemporary social networking software. How do software and design incorporate users and their collaboration? Are they truly empowered; are they put in a position where they are able to understand the apparatus and how their collaboration is part of it? Or has the apparatus become a means against the producers? Thus, when using corporate systems like Google and Facebook, the iPhone and the Kindle without any control over the means of production, which is closed off by opaque interfaces and licenses that limit our rights of use and ownership, we are already the collaborators that Benjamin calls for. For example, the iPhone and the Kindle combine a specific use of technology to distribute the relations between the ‘authors’ and the ‘prodUsers’ in ways that secure their monopolistic business models by limiting the potential of the technology.

Keywords: interface designer, cultural producer, Walter Benjamin, materialist aesthetics, dialectical thinking

Procedia PDF Downloads 147
8834 Lucilia Sericata Netrin-A: Secreted by Larval Salivary Glands as a Potential Agent for Neuroregeneration

Authors: Hamzeh Alipour, Masoumeh Bagheri, Tahereh Karamzadeh, Abbasali Raz, Kourosh Azizi

Abstract:

Netrin-A, a protein identified for guiding commissural axons, has a similar role in angiogenesis. In addition, studies have shown that one of the Netrin-A receptors is expressed in the growing cells of small capillaries. It will be interesting to study this group of molecules because their role in wound healing will become clearer in the future owing to angiogenesis. Larvae of the greenbottle blowfly Lucilia sericata (L. sericata) are increasingly used in maggot therapy of chronic wounds. The aim of this study was the identification of the molecular features of Netrin-A in L. sericata larvae. Larvae were reared under standard maggotarium conditions. The nucleic acid sequence of L. sericata Netrin-A (LSN-A) was then identified using Rapid Amplification of cDNA Ends (RACE) and Rapid Amplification of Genomic Ends (RAGE). Parts of the Netrin-A gene, including the middle, 3′- and 5′-ends, were identified, TA-cloned into the pTG19 plasmid, and transferred into DH5ɑ Escherichia coli. Each part was sequenced and assembled using SeqMan software. This gene structure was further subjected to in silico analysis. The DNA of LSN-A was identified to be 2407 bp, while its mRNA sequence was recognized as 2115 bp by Oligo 0.7 software. It translates into a Netrin-A protein of 704 amino acid residues, with a molecular weight estimated at 78.6 kDa. The 3D structure of Netrin-A drawn by SWISS-MODEL revealed its similarity to human Netrin-1, with 66.8% identity. The LSN-A protein contributes to repairing the myelin membrane in neuronal cells. Ultimately, it can be an effective candidate for neural regeneration and wound healing. Our next step is to produce recombinant proteins for use in the medical sciences.
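
A quick consistency check of the reported mass: 704 residues times an average residue mass of roughly 111.6 Da (an assumed, composition-dependent figure) lands on the stated 78.6 kDa:

```python
# Rough protein mass estimate: average amino-acid residue mass ~111.6 Da
# (water of the peptide bonds already subtracted), plus one water molecule
# for the chain termini. The average residue mass is an assumption.
residues = 704
avg_residue_da = 111.6
mass_kda = (residues * avg_residue_da + 18.0) / 1000.0
print(f"estimated mass ~ {mass_kda:.1f} kDa")   # ~78.6 kDa, as reported
```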

Keywords: maggot therapy, netrin-A, RACE, RAGE, lucilia sericata

Procedia PDF Downloads 115
8833 Motivation, Legal Knowledge and Preference Investigation of Hungarian Law Students

Authors: Zsofia Patyi

Abstract:

While empirical studies under socialism in Hungary focused on the lawyer society as a whole, current research deals with law students specifically. The change of regime and the mutation of legal education have influenced the motivation, efficiency, social background and self-concept of law students. This shift needs to be acknowledged, and the education system improved for students and together with students. A new law student society requires a different legal education system, different legal studies, or, at the minimum, a different approach to teaching law. This is to ensure that competitive lawyers are trained who understand the constantly changing nature of the law and, as a result, can potentially transform or create legislation themselves. A number of developments can affect law students’ awareness of legal relations in a democratic state. In today’s Hungary, these decisive factors are primarily the new regulation of the financing of law students and, secondly, the new Hungarian constitution (henceforth: Alaptörvény), which has modified the basis of the Hungarian legal system. These circumstances necessitate a new, comprehensive and empirical investigation of law students. To this end, our research team (comprising a professor, a Ph.D. student, and two law students) is conducting a new type of study in February 2017. The first stage of the research project uses the desktop method to explore the research antecedents. Afterward, a structured questionnaire draft will be designed and sent to the Head of the Department of Sociology and the Associate Professor of the Department of Constitutional Law at the University of Szeged to have the draft checked and amended. Next, an open workshop for students and teachers will be organized with the aim of discussing the draft and creating the final questionnaire. The research team will then contact each Hungarian university with a faculty of law to reach all 1st- and 4th-year law students. 1st-year students have not yet studied the Alaptörvény, while 4th-year students have. All students will be asked to fill in the questionnaire (in February). Results are expected at the end of February. In March, the research team will report the results and present the conclusions. In addition, the results will be compared with previous research. The outcome will help us answer the following research question: how should legal studies and legal education in Hungary be reformed in accordance with law students and the future lawyer society? The aim of the research is to (1) help create a new student- and career-centered teaching method for legal studies, (2) offer a new perspective on legal education, and (3) create a helpful and useful de lege ferenda proposal for the attorney general as regards legal education as part of higher education.

Keywords: change, constitution, investigation, law students, lawyer society, legal education, legal studies, motivation, reform

Procedia PDF Downloads 270
8832 The Routes of Human Suffering: How Point-Source and Destination-Source Mapping Can Help Victim Services Providers and Law Enforcement Agencies Effectively Combat Human Trafficking

Authors: Benjamin Thomas Greer, Grace Cotulla, Mandy Johnson

Abstract:

Human trafficking is one of the fastest growing international crimes and human rights violations in the world. The United States Department of State (State Department) estimates that some 800,000 to 900,000 people are annually trafficked across sovereign borders, with approximately 14,000 to 17,500 of these people coming into the United States. Today’s slavery is conducted by unscrupulous individuals who are often connected to organized criminal enterprises and transnational gangs, extracting huge monetary sums. According to the International Labour Organization (ILO), human traffickers collect approximately $32 billion worldwide annually. Surpassed only by narcotics dealing, trafficking of humans is tied with illegal arms sales as the second largest criminal industry in the world and is the fastest growing field in the 21st century. Perpetrators of this heinous crime abound. They are not limited to single or ‘sole practitioners’ of human trafficking but often include Transnational Criminal Organizations (TCOs), domestic street gangs, labor contractors, and otherwise seemingly ordinary citizens. Monetary gain is being elevated over territorial disputes, and street gangs increasingly operate in collaboration with TCOs to further disguise their criminal activity and to utilize their vast networks in an attempt to avoid detection. Traffickers rely on a network of clandestine routes to sell their commodities with impunity. As law enforcement agencies seek to retard the expansion of transnational criminal organizations’ entry into human trafficking, it is imperative that they develop reliable mapping of known exploitative trafficking routes. In a recent report given to the Mexican Congress, the Procuraduría General de la República (PGR) disclosed that from 2008 to 2010 it had identified at least 47 unique criminal networking routes used to traffic victims, and that Mexico’s domestic victims are estimated at between 800,000 adults and 20,000 children annually. Designing a reliable mapping system is a crucial step toward an effective law enforcement response and the deployment of a successful victim support system. Creating this mapping analytic is exceedingly difficult. Traffickers constantly change the way they traffic and exploit their victims. They swiftly adapt to local environmental factors and react remarkably well to market demands, exploiting limitations in the prevailing laws. This article will highlight how human trafficking has become one of the fastest growing and most high-profile human rights violations in the world today; compile current efforts to map and illustrate trafficking routes; and demonstrate how proprietary analytical point-source and destination-source mapping can help local law enforcement, governmental agencies and victim services providers respond effectively to the type and nature of trafficking in their specific geographical locale. Trafficking transcends state and international borders. It demands effective and consistent cooperation between local, state, and federal authorities. Each region of the world has different impact factors, which create distinct challenges for law enforcement and victim services. Our mapping system lays the groundwork for a targeted anti-trafficking response.

Keywords: human trafficking, mapping, routes, law enforcement intelligence

Procedia PDF Downloads 385
8831 Geovisualisation for Defense Based on a Deep Learning Monocular Depth Reconstruction Approach

Authors: Daniel R. dos Santos, Mateus S. Maldonado, Estevão J. R. Batista

Abstract:

Military commanders are increasingly dependent on spatial awareness: knowing where the enemy is, understanding how battle scenarios change over time, and visualizing these trends in ways that offer insights for decision-making. Thanks to advancements in geospatial technologies and artificial intelligence algorithms, commanders are now able to modernize military operations on a universal scale. Thus, geovisualisation has become an essential asset in the defense sector. It has become indispensable for better decision-making in dynamic/temporal scenarios, operation planning and management of the war field, situational awareness, effective planning, monitoring, and other tasks. For example, 3D visualization of war-field data contributes to intelligence analysis, evaluation of post-mission outcomes, and the creation of predictive models that enhance decision-making and strategic planning capabilities. However, old-school visualization methods are slow, expensive, and unscalable. Despite modern technologies for generating 3D point clouds, such as LiDAR and stereo sensors, monocular depth estimation based on deep learning can offer a faster and more detailed view of the environment, transforming single images into visual information for valuable insights. We propose a dedicated monocular depth reconstruction approach via deep learning techniques for 3D geovisualisation of satellite images, introducing scalability in terrain reconstruction and data visualization. First, a dataset with more than 7,000 satellite images and associated digital elevation models (DEMs) is created. It is based on high-resolution optical and radar imagery collected from Planet and Copernicus, with which we fuse high-resolution topographic data obtained using technologies such as LiDAR, along with the associated geographic coordinates. Second, we developed an imagery-DEM fusion strategy that combines feature maps from two encoder-decoder networks: one network is trained with radar and optical bands, while the other is trained with DEM features, to compute dense 3D depth. Finally, we constructed a benchmark with sparse depth annotations to facilitate future research. To demonstrate the proposed method's versatility, we evaluated its performance on non-annotated satellite images and implemented an enclosed environment useful for geovisualisation applications. The algorithms were developed in Python 3, employing open-source computing libraries, i.e., Open3D, TensorFlow, and PyTorch3D. The proposed method supports fast and accurate decision-making with GIS for the localization of troops, the position of the enemy, and terrain and climate conditions. This analysis enhances situational awareness, enabling commanders to fine-tune their strategies and distribute resources proficiently.
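
A minimal PyTorch sketch of the two-branch fusion idea described above; the layer sizes and band counts are placeholders, not the authors' architecture:

```python
import torch
import torch.nn as nn

class TwoBranchDepth(nn.Module):
    """Sketch of imagery-DEM fusion: two encoders whose feature maps are
    concatenated before a shared decoder predicts a dense depth map."""
    def __init__(self):
        super().__init__()
        self.img_enc = nn.Sequential(nn.Conv2d(4, 32, 3, 2, 1), nn.ReLU())  # optical + radar bands
        self.dem_enc = nn.Sequential(nn.Conv2d(1, 32, 3, 2, 1), nn.ReLU())  # DEM features
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, 1, 1),                                      # dense depth
        )

    def forward(self, img, dem):
        fused = torch.cat([self.img_enc(img), self.dem_enc(dem)], dim=1)
        return self.decoder(fused)

model = TwoBranchDepth()
depth = model(torch.randn(1, 4, 128, 128), torch.randn(1, 1, 128, 128))
print(depth.shape)   # torch.Size([1, 1, 128, 128])
```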

Keywords: depth, deep learning, geovisualisation, satellite images

Procedia PDF Downloads 18
8830 Impact of Fermentation Time and Microbial Source on Physicochemical Properties, Total Phenols and Antioxidant Activity of Finger Millet Malt Beverage

Authors: Henry O. Udeh, Kwaku G. Duodu, Afam I. O. Jideani

Abstract:

Finger millet (FM) [Eleusine coracana] is considered a potential ‘‘super grain’’ by the United States National Academies, as one of the most nutritious of all the major cereals. The regular consumption of FM-based diets has been associated with a reduced risk of diabetes, cataract and gastrointestinal tract disorders. Hypoglycaemic, hypocholesterolaemic and anticataractogenic effects, among other health-improving properties, have been reported. This study examined the effect of fermentation time and microbial source on the physicochemical properties, phenolic compounds and antioxidant activity of two finger millet (FM) malt flours. Sorghum was used as an external reference. The grains were malted, mashed and fermented using either the grain microflora or Lactobacillus fermentum. The phenolic compounds of the resulting beverage were identified and quantified using an ultra-performance liquid chromatography (UPLC) and mass spectrometry (MS) system. A fermentation-time-dependent decrease in the pH and viscosity of the beverages, with a corresponding increase in sugar content, was noted. The phenolic compounds found in the FM beverages were protocatechuic acid, catechin and epicatechin. A decrease in the total phenolics of the beverages was observed with increased fermentation time. The beverages exhibited 2,2-diphenyl-1-picrylhydrazyl and 2,2′-azinobis-3-ethylbenzthiazoline-6-sulfonic acid radical scavenging action and iron-reducing activity, which were significantly (p < 0.05) reduced at 96 h of fermentation for both microbial sources. The 24 h fermented beverages retained a higher amount of total phenolics and had higher antioxidant activity than those of other fermentation periods. The study demonstrates that FM could be utilised as a functional grain in the production of a non-alcoholic beverage with important phenolic compounds for health promotion and wellness.
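
Radical scavenging of the kind measured here is conventionally reported as percent inhibition of the radical's absorbance. A worked sketch; the absorbance readings are invented purely to illustrate the reported decline at 96 h:

```python
# DPPH radical scavenging is typically reported as % inhibition of the
# radical's absorbance (~517 nm); the absorbance values below are invented.
def percent_inhibition(a_control, a_sample):
    return (a_control - a_sample) / a_control * 100.0

a_control = 0.82                          # DPPH solution alone
for hours, a_sample in [(24, 0.31), (96, 0.55)]:
    pct = percent_inhibition(a_control, a_sample)
    print(f"{hours} h fermentation: {pct:.0f}% scavenging")  # 62% -> 33%
```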

Keywords: antioxidant activity, eleusine coracana, fermentation, phenolic compounds

Procedia PDF Downloads 111
8829 Virtual Reality and Avatars in Education

Authors: Michael Brazley

Abstract:

Virtual Reality (VR) and 3D videos are the most current generation of learning technology today. Virtual Reality and 3D videos are now being used in professional offices and schools for marketing and education. Technology in the field of design has progressed from two-dimensional drawings to 3D models, using computers and sophisticated software. Virtual Reality is being used as a collaborative means to allow designers and others to meet and communicate inside models or VR platforms using avatars. This research proposes to teach students from different backgrounds how to take a digital model into a 3D video, then into VR, and finally into VR with multiple avatars communicating with each other in real time. The next step would be to develop the model so that people from three or more different locations can meet as avatars in real time, in the same model, and talk to each other. This research is longitudinal, studying the use of 3D videos in graduate design and of Virtual Reality in XR (Extended Reality) courses. The research methodology is a combination of quantitative and qualitative methods. The qualitative methods begin with the literature review and case studies; the quantitative methods come by way of students’ 3D videos, a survey, and Extended Reality (XR) course work. The end product is a VR platform with multiple avatars able to communicate in real time. This research is important because it will allow multiple users to remotely enter a model or VR platform from any location in the world and effectively communicate in real time. It will lead to improved learning and training using Virtual Reality and avatars, and it is generalizable because most colleges and universities, and many private citizens, own VR equipment and computer labs. This research did produce a VR platform with multiple avatars having the ability to move and speak to each other in real time. Major implications of the research include, but are not limited to, improved learning, teaching, communication, marketing, designing, and planning. Both hardware and software played a major role in project success.

Keywords: virtual reality, avatars, education, XR

Procedia PDF Downloads 101
8828 Planning for Location and Distribution of Regional Facilities Using Central Place Theory and Location-Allocation Model

Authors: Danjuma Bawa

Abstract:

This paper explores the capabilities of the location-allocation model in complementing the strides of existing physical planning models in the location and distribution of facilities for regional consumption. The paper was designed to provide a blueprint to the Nigerian government and other donor agencies, especially the federal government’s Fertilizer Distribution Initiative (FDI), for the revitalization of the terrorism-ravaged regions. Theoretical underpinnings of central place theory related to spatial distribution, interrelationships, and threshold prerequisites were reviewed. The study showcases how the Location-Allocation Model (L-AM), alongside Central Place Theory (CPT), was applied in a Geographic Information System (GIS) environment to map and analyze the spatial distribution of settlements, exploit their physical and economic interrelationships, and explore their hierarchical and opportunistic influences. The study was purely spatial qualitative research, which largely used secondary data such as the spatial location and distribution of settlements, population figures of settlements, the network of roads linking them, and other landform features. These were sourced from government ministries and an open-source consortium. GIS was used as a tool for processing and analyzing such spatial features within the dicta of CPT and L-AM to produce a comprehensive spatial digital plan for the equitable and judicious location and distribution of fertilizer depots in the study area in an optimal way. Population threshold was used as the yardstick for selecting suitable settlements that could stand as service centers to other hinterlands; this was accomplished using the query syntax in ArcMap™. The ArcGIS™ Network Analyst was used to conduct location-allocation analysis for the apportioning of groups of settlements around such service centers within a given threshold distance, as sketched below. Most of the techniques and models ever used by utility planners have been centered on straight-line (Euclidean) distances to settlements; such models neglect impedance cutoffs and the routing capabilities of networks, whereas CPT and L-AM take into consideration both the influential characteristics of settlements and their routing connectivity. The study was undertaken in two terrorism-ravaged Local Government Areas of Adamawa State. Four (4) existing depots in the study area were identified, and 20 more depots in 20 villages were proposed using suitability analysis. Of the 300 settlements mapped in the study area, about 280 were optimally grouped and allocated to the selected service centers, respectively, within a 2 km impedance cutoff. This study complements the giant strides of the federal government of Nigeria by providing a blueprint for ensuring the proper distribution of these public goods, in the spirit of bringing succor to the terrorism-ravaged populace. This will at the same time help boost agricultural activities, thereby lowering food shortages and raising per capita income, as espoused by the government.
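
A toy sketch of the allocation step, using networkx in place of the ArcGIS Network Analyst: settlements above a population threshold become service centers, and every other settlement is assigned to the nearest center within the impedance cutoff. All figures are invented for illustration:

```python
import networkx as nx

# Toy road network: nodes are settlements, edge weights are road distances (km)
G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 1.2), ("B", "C", 0.9), ("C", "D", 1.5),
    ("B", "E", 0.7), ("E", "F", 1.8),
])
population = {"A": 900, "B": 5200, "C": 800, "D": 650, "E": 4100, "F": 300}

threshold, cutoff = 4000, 2.0   # service-center population, 2 km impedance
centers = [n for n, p in population.items() if p >= threshold]

# Allocate every settlement to the nearest center within the cutoff
allocation = {}
for c in centers:
    dist = nx.single_source_dijkstra_path_length(G, c, cutoff=cutoff)
    for n, d in dist.items():
        if n not in centers and (n not in allocation or d < allocation[n][1]):
            allocation[n] = (c, d)
print(allocation)   # settlements missing here exceed the 2 km impedance cutoff
```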

Keywords: central place theory, GIS, location-allocation, network analysis, urban and regional planning, welfare economics

Procedia PDF Downloads 150
8827 Developing a Model to Objectively Assess the Culture of Individuals and Teams in Order to Effectively and Efficiently Achieve Sustainability in the Manpower

Authors: Ahmed Mohamed Elnady Mohamed Elsafty

Abstract:

This paper explains an applied objective model developed to measure culture qualitatively and quantitatively, whether in individuals or in teams, in order to be able to use culture correctly or modify it efficiently. The model provides precise measurements and consistent interpretations by being comprehensive, updateable, and protected from being misled by imitations. Methodically, the model divides culture into seven dimensions (43 cultural factors in total). The first dimension is outcome-orientation, which consists of five factors and should be highest in leaders. The second dimension is details-orientation, which consists of eight factors and should be highest in the most intelligent members. The third dimension is team-orientation, which consists of five factors and should be highest in instructors or coaches. The fourth dimension is change-orientation, which consists of five factors and should be highest in soldiers. The fifth dimension is people-orientation, which consists of eight factors and should be highest in media members. The sixth dimension is masculinity, which consists of seven factors and should be highest in hard workers. The last dimension is stability, which consists of seven factors and should be highest in soft workers. In this paper, the details of all cultural factors are explained. Practically, collecting information about each cultural factor in the targeted person or team is essential in order to calculate the degrees of all cultural factors, using the suggested equation of multiplying 'the score of factor presence' by 'the score of factor strength' (see the sketch below). This paper explains the details of how to build each score. Based on the highest degrees, which identify the prominent cultural dimension, placing the tested individual or team in the supposedly right position at the right time provides a chance to use minimal effort to align everyone with the organization’s objectives. In other words, making everyone self-motivated by setting him/her at the right source of motivation is the most effective and efficient method to achieve high levels of competency, commitment, and sustainability. Modifying a team culture can be achieved by excluding or including members with relatively high or low degrees in specific cultural factors. In conclusion, culture can be considered the software of human beings, and it is one of the major constraining factors on managerial discretion. It represents the behaviors, attitudes, and motivations of the human resources, which are vital for enhancing quality and safety, expanding market share, and defending against attacks from the external environment. Thus, it is tremendously essential and useful to use such a comprehensive model to measure, use, and modify culture.
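
A minimal sketch of the paper's scoring rule (presence score times strength score, summed per dimension); the factor scores below are illustrative placeholders, not the author's calibration:

```python
# Degree of a factor = presence score x strength score; a dimension's degree
# is the sum over its factors. All scores here are invented placeholders.
factors = {
    "outcome-orientation": [(1, 4), (1, 5), (0, 3), (1, 4), (1, 2)],  # (presence, strength)
    "details-orientation": [(1, 2), (0, 4), (1, 3), (1, 1),
                            (0, 5), (1, 2), (1, 3), (0, 2)],
}
degrees = {dim: sum(p * s for p, s in scores) for dim, scores in factors.items()}
prominent = max(degrees, key=degrees.get)
print(degrees, "-> prominent dimension:", prominent)
```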

Keywords: culture dimensions, culture factors, culture measurement, cultural analysis, cultural modification, self-motivation, alignment to objectives, competency, sustainability

Procedia PDF Downloads 167
8826 Notes on Frames in Weighted Hardy Spaces and Generalized Weighted Composition Operators

Authors: Shams Alyusof

Abstract:

This work aims to enrich the study of frames, given their prominent role in pure and applied mathematics and their many applications in computer science and engineering. Recently, there have been remarkable studies of operators that preserve frames on some spaces, and this research can be considered an extension of such studies. Indeed, in this paper we characterize the weighted composition operators that preserve frames in weighted Hardy spaces on the open unit disk. Moreover, we show that this characterization does not apply to generalized weighted composition operators on such spaces. Nevertheless, the study could be extended to provide more specific characterizations.
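
For reference, the standard frame condition the paper builds on, together with the weighted composition operator it studies, in the usual notation (this is the textbook definition, not a result of the paper):

```latex
% A sequence (f_n) in a Hilbert space H is a frame if there exist
% constants 0 < A <= B < \infty such that, for every f in H,
A\,\|f\|^2 \;\le\; \sum_{n}\bigl|\langle f, f_n\rangle\bigr|^2 \;\le\; B\,\|f\|^2 .
% A weighted composition operator acts on analytic functions by
W_{\psi,\varphi} f \;=\; \psi\cdot\,(f\circ\varphi),
% and "preserving frames" means that (W_{\psi,\varphi} f_n) is again a
% frame whenever (f_n) is.
```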

Keywords: frames, generalized weighted composition operators, weighted Hardy spaces, analytic functions

Procedia PDF Downloads 127
8825 The Combined Effect of Different Levels of Fe(III) in Diet and Cr(III) Supplementation on the Ca Status in Wistar Rats

Authors: Staniek Halina

Abstract:

An inappropriate supply of trace elements such as iron(III) and chromium(III) may be a risk factor for many metabolic disorders (e.g., anemia and diabetes) and may also cause toxic effects. However, little is known about their mutual interactions and their impact on these disturbances. The effects of Cr(III) supplementation combined with a deficient or excessive supply of Fe(III) under in vivo conditions are not yet known. The objective of the study was to investigate the combined effect of different Fe(III) levels in the diet and simultaneous Cr(III) supplementation on the Ca distribution in the organs of healthy rats. The assessment was based on a two-factor experiment (three levels of each factor) carried out on 54 female Wistar rats (Rattus norvegicus). The animals were randomly divided into 9 groups and, for 6 weeks, fed semi-purified AIN-93 diets with three different Fe(III) levels as factor A [control (C) 45 mg/kg (100% of the Recommended Daily Allowance for rodents), deficient (D) 5 mg/kg (10% RDA), and oversupply (H) 180 mg/kg (400% RDA)]. The second factor (B) was simultaneous dietary supplementation with Cr(III) at doses of 1, 50 and 500 mg/kg of the diet. Iron(III) citrate was the source of Fe(III). The complex of Cr(III) with propionic acid, also called Cr₃ or chromium(III) propionate (CrProp), was used as the source of Cr(III) in the diet. The Ca content of the analysed samples (liver, kidneys, spleen, heart, and femur) was determined by the Atomic Absorption Spectrometry (AAS) method. It was found that the dietary Fe(III) supply and Cr(III) supplementation, independently and in combination, influenced Ca metabolism in healthy rats. Regardless of Cr(III) supplementation, the oversupply of Fe(III) (180 mg/kg) decreased the Ca content in the liver and kidneys, while it increased the Ca saturation of bone tissue. High Cr(III) doses lowered the hepatic Ca content. Moreover, Cr(III) tended to decrease the Ca content in the kidneys and heart, but this effect was not statistically significant. A combined effect of the experimental factors on the Ca content in the liver and the femur was observed. As the Fe(III) content in the diet increased, the Ca level in the liver decreased and bone saturation increased, and additional Cr(III) supplementation intensified these effects. The study proved that different Fe(III) contents in the diet, independently and in combination with Cr(III) supplementation, affected the Ca distribution in the organs of healthy rats.
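
As an illustration of how such a two-factor design is commonly analysed (a two-way ANOVA with interaction), a minimal Python sketch with entirely synthetic data; the column names, response values, and the use of statsmodels are assumptions, not the authors' actual pipeline.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
fe_levels = ["C", "D", "H"]   # 100%, 10% and 400% of the Fe(III) RDA
cr_doses = [1, 50, 500]       # mg Cr(III) per kg of diet

# Synthetic data: 9 groups x 6 rats = 54 animals, as in the study design;
# the liver-Ca values are made up purely for illustration.
rows = [
    {"fe_level": fe, "cr_dose": cr,
     "ca_liver": rng.normal(200.0 - 15.0 * fe_levels.index(fe), 5.0)}
    for fe in fe_levels for cr in cr_doses for _ in range(6)
]
df = pd.DataFrame(rows)

# Two-way ANOVA with interaction: main effects of Fe level and Cr dose,
# plus their combined (interaction) effect on liver Ca.
model = smf.ols("ca_liver ~ C(fe_level) * C(cr_dose)", data=df).fit()
print(anova_lm(model, typ=2))
```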

Keywords: calcium, chromium(III), iron(III), rats, supplementation

Procedia PDF Downloads 200
8824 [Keynote] Implementation of Quality Control Procedures in Radiotherapy CT Simulator

Authors: B. Petrović, L. Rutonjski, M. Baucal, M. Teodorović, O. Čudić, B. Basarić

Abstract:

Purpose/Objective: Radiotherapy treatment planning requires the use of a CT simulator in order to acquire CT images. The overall performance of the CT simulator determines the quality of the radiotherapy treatment plan and, ultimately, the outcome of treatment for every single patient. Therefore, international recommendations strongly advise setting up quality control procedures for every machine involved in the radiotherapy treatment planning process, including the CT scanner/simulator. The overall process requires a number of tests, performed on a daily, weekly, monthly or yearly basis, depending on the feature tested. Materials/Methods: Two phantoms were used: a dedicated CIRS 062QA phantom and the QA phantom supplied with the CT simulator. The examined CT simulator was a Siemens Somatom Definition AS Open, dedicated to radiation therapy treatment planning. The CT simulator has built-in software which enables fast and simple evaluation of CT QA parameters using the phantom provided with the scanner. On the other hand, the recommendations contain additional tests, which were done with the CIRS phantom. Also, legislation on ionizing radiation protection requires CT testing at defined intervals. Taking into account the requirements of the law, the built-in tests of the CT simulator, and international recommendations, the institutional QC programme for the CT simulator was defined and implemented. Results: The CT simulator parameters evaluated in the study were the following: CT number accuracy, field uniformity, the complete CT-to-ED conversion curve, spatial and contrast resolution, image noise, slice thickness, and patient table stability. The following limits were established and implemented: CT number accuracy within +/- 5 HU of the value at commissioning; field uniformity within +/- 10 HU in selected ROIs; the complete CT-to-ED curve for each tube voltage must comply with the curve obtained at commissioning, with deviations of no more than 5%; spatial and contrast resolution tests must comply with the tests obtained at commissioning, otherwise the machine requires service; the image noise test result must fall within 20% of the baseline value; slice thickness must meet manufacturer specifications; and longitudinal transfer of a loaded patient table must not show a vertical deviation of more than 2 mm. Conclusion: The implemented QA tests gave an overall basic understanding of the CT simulator's functionality and its clinical effectiveness in radiation treatment planning. The legal requirement for the clinic is to set up its own QA programme with minimum testing, but it remains the user's decision whether additional testing, as recommended by international organizations, will be implemented in order to improve the overall quality of the radiation treatment planning procedure, since the quality of the CT images used for planning influences the delineation of the tumor, the calculation accuracy of the treatment planning system and, finally, the delivery of radiation treatment to the patient.
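
A minimal Python sketch of how the tolerance limits quoted above might be encoded in an automated QA check; the tolerance values are taken from the abstract, while the function and key names are hypothetical.

```python
# Tolerances as quoted in the abstract; names and structure are assumed.
TOLERANCES = {
    "ct_number_hu": 5.0,    # +/- 5 HU of the commissioning value
    "uniformity_hu": 10.0,  # +/- 10 HU in selected ROIs
    "ct_to_ed_pct": 5.0,    # <= 5% deviation from the commissioning curve
    "noise_pct": 20.0,      # within 20% of the baseline value
    "table_sag_mm": 2.0,    # <= 2 mm vertical deviation under load
}

def within_tolerance(test: str, measured: float, baseline: float) -> bool:
    """Compare a measured value against its commissioning baseline."""
    if test.endswith("_pct"):
        deviation = abs(measured - baseline) / abs(baseline) * 100.0
    else:
        deviation = abs(measured - baseline)
    return deviation <= TOLERANCES[test]

# Example: a water ROI measured at 3.2 HU against a 0.0 HU baseline.
print(within_tolerance("ct_number_hu", measured=3.2, baseline=0.0))  # True
```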

Keywords: CT simulator, radiotherapy, quality control, QA programme

Procedia PDF Downloads 535
8823 Xen45 Gel Implant in Open Angle Glaucoma: Efficacy, Safety and Predictors of Outcome

Authors: Fossarello Maurizio, Mattana Giorgio, Tatti Filippo

Abstract:

The most widely performed surgical procedure in Open-Angle Glaucoma (OAG) is trabeculectomy. Although this filtering procedure is extremely effective, surgical failures and postoperative complications have been reported. Due to its invasive nature and possible complications, trabeculectomy is usually reserved, in practice, for patients who are refractory to medical and laser therapy. Recently, a number of micro-invasive surgical techniques (MIGS: Micro-Invasive Glaucoma Surgery) have been introduced into clinical practice. They meet the criteria of a micro-incisional approach, minimal tissue damage, short surgical time, reliable IOP reduction, an extremely high safety profile and rapid postoperative recovery. The Xen45 Gel Implant (Allergan, Dublin, Ireland) is one of the MIGS alternatives and consists of a porcine gelatin tube designed to create an aqueous flow from the anterior chamber to the subconjunctival space, bypassing the resistance of the trabecular meshwork. In this study we report the results of this technique as a favorable option in the treatment of OAG, for its benefits in terms of efficacy and safety, either alone or in combination with cataract surgery. This is a retrospective, single-center study conducted in consecutive OAG patients who underwent Xen45 Gel Stent implantation alone or in combination with phacoemulsification from October 2018 to June 2019. The primary endpoint of the study was to evaluate the reduction of both IOP and the number of antiglaucoma medications at 12 months. The secondary endpoint was to correlate filtering bleb morphology, evaluated by means of anterior segment OCT, with efficacy in IOP lowering and the eventual requirement for further procedures. Data were recorded in Microsoft Excel and the study analysis was performed using Microsoft Excel and SPSS (IBM). Mean values with standard deviations were calculated for IOP and the number of antiglaucoma medications at all time points. The Kolmogorov-Smirnov test showed that IOP followed a normal distribution at all times; therefore, the paired Student's t-test was used to compare baseline and postoperative mean IOP. The correlation between postoperative Day 1 IOP and Month 12 IOP was evaluated using the Pearson coefficient. Thirty-six eyes of 36 patients were evaluated. Compared to baseline, mean IOP and the mean number of antiglaucoma medications significantly decreased from 27.33 ± 7.67 mmHg to 16.3 ± 2.89 mmHg (38.8% reduction) and from 2.64 ± 1.39 to 0.42 ± 0.8 (84% reduction), respectively, at 12 months after surgery (both p < 0.001). According to bleb morphology, eyes were divided into a uniform group (n=8, 22.2%), a subconjunctival separation group (n=5, 13.9%), a microcystic multiform group (n=9, 25%) and a multiple internal layer group (n=14, 38.9%). Compared to baseline, there was no significant difference in IOP between the 4 groups at the month 12 follow-up visit. Adverse events included decreased bleb function (n=14, 38.9%), hypotony (n=8, 22.2%) and choroidal detachment (n=2, 5.6%). All eyes presenting bleb flattening underwent needling and MMC injection. The highest percentage of patients requiring secondary needling was in the uniform group (75%), with a significant difference between the groups (p=0.03). The Xen45 gel stent, either alone or in combination with phacoemulsification, provided a significant reduction in both IOP and antiglaucoma medication, with an elevated safety profile.
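
A minimal Python sketch of the statistical workflow described (Kolmogorov-Smirnov normality check, paired t-test, Pearson correlation), using synthetic stand-ins since the per-eye data are not published.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-ins for the 36 eyes; means/SDs echo the abstract,
# the day-1 values are purely assumed.
iop_base = rng.normal(27.33, 7.67, 36)   # baseline IOP (mmHg)
iop_m12 = rng.normal(16.3, 2.89, 36)     # month-12 IOP (mmHg)
iop_day1 = rng.normal(18.0, 5.0, 36)     # day-1 IOP (assumed)

# Normality check: KS test on standardized values against N(0, 1).
z = (iop_m12 - iop_m12.mean()) / iop_m12.std()
print(stats.kstest(z, "norm"))

# Paired Student's t-test, baseline vs month 12.
print(stats.ttest_rel(iop_base, iop_m12))

# Pearson correlation between day-1 and month-12 IOP.
print(stats.pearsonr(iop_day1, iop_m12))
```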

Keywords: anterior segment OCT, bleb morphology, micro-invasive glaucoma surgery, open angle glaucoma, Xen45 gel implant

Procedia PDF Downloads 144
8822 A Study on Characteristics of Runoff Analysis Methods at the Time of Rainfall in Rural Area, Okinawa Prefecture Part 2: A Case of Kohatu River in South Central Part of Okinawa Prefecture

Authors: Kazuki Kohama, Hiroko Ono

Abstract:

According to the Japan Meteorological Agency and the Intergovernmental Panel on Climate Change Fifth Assessment Report, rainfall in Japan is gradually increasing every year, meaning that the difference in rainfall between the rainy season and the rest of the year is growing. In addition, short-duration heavy rainfall shows a clear increasing trend. In recent years, natural disasters have caused enormous human damage in various parts of Japan. Regarding water disasters, local heavy rain and floods of large rivers occur frequently, and a policy was adopted to promote both structural ('hard') and non-structural ('soft') emergency disaster prevention measures under a vision of rebuilding social awareness of water disaster prevention. Okinawa Prefecture, located in a subtropical region, experiences torrential rain and water disasters such as river floods several times a year; such flooding occurs in specific rivers among the prefecture's 97 rivers. Rivers in Okinawa are characteristically narrow and of insufficient capacity, which makes them prone to flooding during heavy rain. This study focuses on the Kohatu River, one of these specific rivers. In fact, its water level rises above the river levee almost once a year, usually without damage to the surrounding buildings. In some cases, however, the water level has reached the ground-floor height of houses; this has happened nine times to date. The purpose of this research is to clarify the relationship between precipitation, surface outflow and the total treated water quantity of the Kohatu River. Because hydrological analysis is complicated and requires detailed data, the method mainly uses Geographic Information System (GIS) software and an outflow analysis system. First, we extracted the watershed and divided it into 23 catchment areas to determine how much surface outflow reaches the runoff point in each 10-minute interval. Second, we created a unit hydrograph relating surface outflow to flow area and time; it shows the maximum surface outflow at 2400 to 3000 seconds. Lastly, we compared the values estimated from the unit hydrograph with measured values and found that the measured value is usually lower, because of evaporation and transpiration. In this study, hydrograph analysis was performed using GIS software and an outflow analysis system. On this basis, we could clarify the flood timing and the amount of surface outflow.
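
A minimal Python sketch of the unit-hydrograph step: effective rainfall per 10-minute interval is routed to the outlet by discrete convolution. The triangular hydrograph shape and the rainfall values are hypothetical; only the 10-minute step and the 2400-3000 s peak come from the abstract.

```python
import numpy as np

def runoff_from_uh(rain_mm: np.ndarray, uh: np.ndarray) -> np.ndarray:
    """Surface runoff = effective rainfall convolved with the unit hydrograph."""
    return np.convolve(rain_mm, uh)

# Hypothetical dimensionless UH over 600 s steps, peaking around steps 4-5
# (2400-3000 s, as reported); ordinates sum to 1.0.
uh = np.array([0.05, 0.15, 0.30, 0.25, 0.15, 0.07, 0.03])
rain = np.array([0.0, 2.0, 8.0, 5.0, 1.0])  # made-up effective rainfall (mm)
print(runoff_from_uh(rain, uh))
```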

Keywords: disaster prevention, water disaster, river flood, GIS software

Procedia PDF Downloads 143
8821 A Failure to Strike a Balance: The Use of Parental Mediation Strategies by Foster Carers and Social Workers

Authors: Jennifer E Simpson

Abstract:

Background and purpose: The ubiquitous use of the Internet and social media by children and young people has had a dual effect. The first is to open up a world of possibilities and promise, characterized by the ability to consume and create content, connect with friends, explore and experiment. The second relates to risks such as unsolicited requests, sexual exploitation, cyberbullying and commercial exploitation. This duality poses significant difficulties for a generation of foster carers and social workers who have no childhood experience of growing up with the Internet, social media and digital devices to draw on. This presentation is concerned with the findings of a small qualitative study of the use of digital devices and the Internet by care-experienced young people to stay in touch with their families, and the way this was managed by foster carers and social workers using specific parental mediation strategies. The findings highlight that restrictive strategies were used by foster carers and endorsed by social workers. An argument is made for an approach that develops a series of balanced solutions, moving foster carers away from such restrictive approaches towards those that are grounded in co-use and interpretive in nature. Methods: Using a purposive sampling strategy, 12 triads consisting of care-experienced young people (aged 13-18 years), their foster carers and allocated social workers were recruited. All respondents undertook a semi-structured interview, with the young people detailing, via an Ecomap, which social media apps and other devices they used to contact their families. The foster carers and social workers shared details of the methods and approaches they used to manage digital devices and the Internet in general. Data analysis was performed using a Framework analytic method to explore the various attitudes, as well as the complementary and contradictory perspectives, of the young people, their foster carers and allocated social workers. Findings: The majority of foster carers made use of parental mediation strategies that erred on the side of setting rules and regulations (restrictive), ad-hoc checking of a young person's behavior and device (monitoring), and software used to limit or block access to inappropriate websites (technical). Minimal use was made by foster carers of strategies that involve talking about content (active/interpretive) or sharing Internet activities (co-use). The majority of social workers likewise had a strong preference for restrictive approaches. Conclusions and implications: Trepidation on the part of both foster carers and social workers about the use of digital devices and the Internet meant that the parental strategies used were weighted towards restriction, with little use made of co-use and interpretive approaches. This lack of balance calls for solutions grounded in co-use and an interpretive approach, both of which can be achieved through training and support, as well as wider policy change.

Keywords: parental mediation strategies, risk, children in state care, online safety

Procedia PDF Downloads 78
8820 The Determination of Phosphorus Solubility in Iron as a Function of the Other Components

Authors: Andras Dezső, Peter Baumli, George Kaptay

Abstract:

Phosphorus is an important component of steels because it changes their mechanical properties and can modify their structure. Phosphorus can form the Fe3P compound, which segregates at ferrite grain boundaries at the nano- to microscale. This intermetallic compound degrades the mechanical properties; for example, it causes blue brittleness, i.e., embrittlement produced by the segregated particles at 200-300°C. This work describes phosphide solubility as affected by the other components. We performed calculations for Ni, Mo, Cu, S, V, C, Si, Mn, and Cr using the Thermo-Calc software and approximated the effects by fitted functions. The binary Fe-P system has a solubility line described by the equation ln w0 = -3.439 - 1903/T, where w0 is the maximum soluble phosphorus concentration in weight percent and T is the temperature in kelvin. The equation shows that phosphorus becomes more soluble as the temperature increases. Nickel, molybdenum, vanadium, silicon, manganese and chromium affect the maximum soluble concentration: the higher the concentration of these elements in the steel, the lower the phosphorus solubility. Copper, sulphur and carbon have no effect on phosphorus solubility. In all cases, the maximum soluble concentration is predicted to increase as the temperature rises. Between 473 K and 673 K, the phase diagrams of these systems contain mostly two- or three-phase eutectoid regions and single-phase ferritic intervals. In the eutectoid regions, ferrite, iron phosphide and the metal(III) phosphide are in equilibrium. This modelling predicts which elements help to avoid phosphide segregation. These data are important when producing or selecting steels in which phosphide segregation must be prevented.
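
A minimal Python sketch of the solubility line, reading the abstract's mixed decimal notation as ln w0 = -3.439 - 1903/T (this reading is an assumption); it evaluates the maximum soluble phosphorus content over the temperature interval discussed.

```python
import math

def max_p_solubility_wt_pct(t_kelvin: float) -> float:
    """Fe-P solubility line, ln w0 = -3.439 - 1903/T; constants as read
    from the abstract (the decimal interpretation is an assumption)."""
    return math.exp(-3.439 - 1903.0 / t_kelvin)

# Maximum soluble P (wt%) across the 473-673 K interval from the abstract.
for t in (473, 573, 673):
    print(t, round(max_p_solubility_wt_pct(t), 5))
```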

Keywords: phosphorus, steel, segregation, Thermo-Calc software

Procedia PDF Downloads 629
8819 Parabolic Impact Law of High Frequency Exchanges on Price Formation in Commodities Market

Authors: L. Maiza, A. Cantagrel, M. Forestier, G. Laucoin, T. Regali

Abstract:

Evaluating the impact of High Frequency Trading (HFT) on financial markets is very important for traders who use market analysis to detect winning transaction opportunities. An analysis of HFT data on the tobacco commodity market is discussed here, and an interesting linear relationship is shown between trading frequency and the difference between the averaged trading prices above and below the considered trading frequency. This may open new perspectives on the understanding of market data and could provide a possible interpretation of Adam Smith's invisible hand.
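
A minimal Python sketch of the metric as described: for a chosen frequency threshold, compare the mean price of trades above it with the mean price of those below. The data and the exact definition of 'trading frequency' here are hypothetical.

```python
import numpy as np

def price_gap(freqs: np.ndarray, prices: np.ndarray, f: float) -> float:
    """Mean price of trades above frequency f minus mean price of the rest."""
    return float(prices[freqs > f].mean() - prices[freqs <= f].mean())

# Entirely synthetic example data (trades per second and trade prices).
rng = np.random.default_rng(2)
freqs = rng.exponential(10.0, 1000)
prices = 100 + 0.02 * freqs + rng.normal(0.0, 0.5, 1000)
for f in (5.0, 10.0, 20.0):
    print(f, round(price_gap(freqs, prices, f), 4))
```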

Keywords: financial market, high frequency trading, analysis, impacts, Adam Smith's invisible hand

Procedia PDF Downloads 362