Search results for: two dimensional radiation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3440


290 An Infinite Mixture Model for Modelling Stutter Ratio in Forensic Data Analysis

Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer

Abstract:

Forensic DNA analysis has received much attention over the last three decades due to its usefulness in human identification. The statistical interpretation of DNA evidence is recognised as one of the most mature fields in forensic science. Peak heights in an electropherogram (EPG) are approximately proportional to the amount of template DNA in the original sample being tested. A stutter is a minor peak in an EPG that is not attributable to an allele of a potential contributor; it is considered an artefact, presumed to arise from miscopying or slippage during PCR. Stutter peaks are mostly analysed in terms of the stutter ratio, which is calculated relative to the corresponding parent allele height. The analysis of mixture profiles has always been problematic in evidence interpretation, especially in the presence of PCR artefacts such as stutters. Unlike binary and semi-continuous models, continuous models assign a probability (as a continuous weight) to each possible genotype combination and make significantly greater use of continuous peak height information, resulting in more efficient and reliable interpretations. A sound methodology for distinguishing between stutters and real alleles is therefore essential for the accuracy of the interpretation, and any such method must be able to model stutter peaks. Bayesian nonparametric methods provide increased flexibility in applied statistical modelling. Mixture models are frequently employed as fundamental data analysis tools for clustering and classification and assume unidentified heterogeneous sources for the data. In model-based clustering, each unknown source is represented by a cluster, and the clusters are modelled using parametric models. Specifying the number of components in finite mixture models, however, is practically difficult, even though the calculations are relatively simple.
Infinite mixture models, in contrast, do not require the user to specify the number of components. Instead, a Dirichlet process, an infinite-dimensional generalization of the Dirichlet distribution, is used to deal with the problem of choosing the number of components. The Chinese restaurant process (CRP), the stick-breaking process and the Pólya urn scheme are frequently used as Dirichlet priors in Bayesian mixture models. In this study, we illustrate an infinite mixture of simple linear regression models for modelling stutter ratio and introduce some modifications to overcome weaknesses associated with the CRP.
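As an illustration of the CRP prior mentioned above, the following minimal Python sketch (an illustrative sketch, not the authors' implementation; `alpha` is the Dirichlet concentration parameter) samples cluster assignments sequentially, so the number of clusters need not be fixed in advance:

```python
import random

def crp_assignments(n_customers, alpha, seed=0):
    """Sample cluster (table) assignments from a Chinese restaurant process.

    Customer i joins an existing table with probability proportional to its
    occupancy, or opens a new table with probability proportional to alpha.
    """
    rng = random.Random(seed)
    counts = []       # occupancy of each table so far
    assignments = []
    for i in range(n_customers):
        total = i + alpha             # normalizer: i seated customers + alpha
        r = rng.uniform(0.0, total)
        cum = 0.0
        for k, c in enumerate(counts):
            cum += c
            if r < cum:               # join existing table k
                counts[k] += 1
                assignments.append(k)
                break
        else:                         # open a new table
            counts.append(1)
            assignments.append(len(counts) - 1)
    return assignments

labels = crp_assignments(100, alpha=1.0)
print(len(set(labels)))  # number of clusters; grows roughly like alpha * log(n)
```

The number of occupied tables grows slowly with the number of customers, which is exactly the property that lets the data determine the number of mixture components.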

Keywords: Chinese restaurant process, Dirichlet prior, infinite mixture model, PCR stutter

Procedia PDF Downloads 306
289 Predicting the Effect of Vibro Stone Column Installation on Performance of Reinforced Foundations

Authors: K. Al Ammari, B. G. Clarke

Abstract:

Soil improvement using the vibro stone column technique consists of two main parts: (1) the installed load-bearing columns of well-compacted, coarse-grained material and (2) the improvement of the surrounding soil due to vibro compaction. Extensive research has been carried out over the last 20 years to understand the improvement in composite foundation performance due to the second part mentioned above. Nevertheless, few of these studies have tried to quantify some of the key design parameters, namely the changes in the stiffness and stress state of the treated soil, or have considered these parameters in the design and calculation process. Consequently, empirical and conservative design methods are still being used by ground improvement companies, with a significant variety of results in engineering practice. A two-dimensional finite element study was performed using PLAXIS 2D AE to develop an axisymmetric model of a single stone-column-reinforced foundation and to quantify the effect of the vibro installation of this column in soft saturated clay. Settlement and bearing performance were studied as an essential part of the design and calculation of the stone column foundation. Particular attention was paid to the large deformation in the soft clay around the installed column caused by its lateral expansion, so the updated-mesh advanced option was used in the analysis. Different degrees of stone column lateral expansion were simulated and numerically analyzed, and the changes in stress state, stiffness, settlement performance and bearing capacity were then quantified. It was found that applying radial expansion produces a horizontal stress in the soft clay mass that gradually decreases as the distance from the stone column axis increases. The excess pore pressure due to the undrained conditions starts to dissipate immediately after the column installation is finished, allowing the horizontal stress to relax.
Changes in the coefficient of lateral earth pressure K*, which is very important in representing the stress state, and the new stiffness distribution in the reinforced clay mass were estimated. More encouragingly, the results showed that increasing the expansion during column installation has a noticeable effect on improving the bearing capacity and reducing the settlement of the reinforced ground. A design method should therefore include this significant effect of the applied lateral displacement during stone column installation in simulation and numerical analysis.
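The decay of horizontal stress away from the column axis described above can be illustrated with the classical elastic cylindrical cavity-expansion result, in which the radial stress increase falls off as the square of the distance from the axis. This is a textbook approximation, not the paper's PLAXIS analysis, and the numbers are hypothetical:

```python
def radial_stress_increase(delta_sigma_a, a, r):
    """Elastic cylindrical cavity expansion: the radial stress increase at the
    cavity wall (radius a) decays as (a/r)^2 with distance r from the axis."""
    return delta_sigma_a * (a / r) ** 2

# hypothetical 100 kPa stress increase at a 0.5 m column wall,
# evaluated at 1, 2 and 4 column radii from the axis
for r in (0.5, 1.0, 2.0):
    print(r, radial_stress_increase(100.0, 0.5, r))
```

In the soft saturated clay of the study, consolidation and plastic yielding modify this elastic picture, which is why the paper quantifies the stress state numerically rather than with a closed-form expression.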

Keywords: bearing capacity, design, installation, numerical analysis, settlement, stone column

Procedia PDF Downloads 358
288 Individual Cylinder Ignition Advance Control Algorithms of the Aircraft Piston Engine

Authors: G. Barański, P. Kacejko, M. Wendeker

Abstract:

This paper presents the impact of the ignition advance control algorithms of the ASz-62IR-16X aircraft piston engine on the combustion process. This aircraft engine is a nine-cylinder, 1000 hp engine with a special electronic ignition control system. The engine has two spark plugs per cylinder, with an ignition advance angle dependent on load and the rotational speed of the crankshaft. Accordingly, in most cases these angles are not optimal for the power generated. The scope of this paper is focused on developing algorithms to control the ignition advance angle in the engine's electronic ignition control system. For this type of engine, i.e. a radial engine, the ignition advance angle should be controlled independently for each cylinder because of the design of the engine and its crankshaft system. The ignition advance angle is controlled in an open-loop way, which means that the control signal (i.e. the ignition advance angle) is determined from previously developed maps, i.e. recorded tables of the correlation between the ignition advance angle and engine speed and load. Load can be measured by engine crankshaft speed or intake manifold pressure. Due to the limited memory of the controller, the impact of other independent variables (such as cylinder head temperature or knock) on the ignition advance angle is given as a series of one-dimensional arrays known as corrective characteristics. The specified value of the ignition advance angle combines the value calculated from the primary characteristics with several correction factors calculated from the corrective characteristics. Individual cylinder control can proceed in line with certain indicators determined from the pressure registered in the combustion chamber. Control is assumed to be based on the following indicators: maximum pressure, maximum pressure angle, and indicated mean effective pressure. Additionally, a knocking combustion indicator was defined.
Individual control can be applied to a single set of spark plugs only, which follows from two fundamental ideas behind the design of the control system: the two ignition control systems operate independently, even when they operate simultaneously, and the entire individual control is performed for the front spark plug only, while the rear spark plug is controlled with a fixed (or specific) offset relative to the front one or from a reference map. The developed algorithms will be verified by simulation and engine test stand experiments. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.
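The open-loop scheme described above, a primary speed/load map plus one-dimensional corrective characteristics, can be sketched as follows. All map values, breakpoints, and the correction table are hypothetical placeholders, not the engine's calibration data:

```python
import bisect

def interp1(xs, ys, x):
    """Piecewise-linear lookup in a 1-D characteristic (clamped at the ends)."""
    x = min(max(x, xs[0]), xs[-1])
    i = max(1, bisect.bisect_left(xs, x))
    i = min(i, len(xs) - 1)
    x0, x1 = xs[i - 1], xs[i]
    t = (x - x0) / (x1 - x0)
    return ys[i - 1] + t * (ys[i] - ys[i - 1])

# hypothetical primary characteristic: advance angle [deg BTDC] vs speed [rpm]
SPEED = [1000, 1500, 2000, 2500]
BASE_ADVANCE = [18.0, 22.0, 26.0, 28.0]
# hypothetical corrective characteristic: offset vs cylinder head temp [deg C]
CHT = [80, 120, 160, 200]
CHT_CORR = [0.0, 0.0, -1.5, -3.0]

def ignition_advance(rpm, head_temp_c):
    """Base map value plus an additive correction, as described in the abstract."""
    return interp1(SPEED, BASE_ADVANCE, rpm) + interp1(CHT, CHT_CORR, head_temp_c)

print(ignition_advance(1750, 100))  # 24.0: base 24.0 deg, zero correction
```

A real controller would apply one such computation per cylinder, with one correction array per additional input variable, which keeps the memory footprint to a set of small 1-D tables instead of a full multi-dimensional map.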

Keywords: algorithm, combustion process, radial engine, spark plug

Procedia PDF Downloads 270
287 A Simulation-Based Investigation of the Smooth-Wall, Radial Gravity Problem of Granular Flow through a Wedge-Shaped Hopper

Authors: A. F. Momin, D. V. Khakhar

Abstract:

Granular materials consist of discrete particles, found in nature and in various industries, that flow under gravity and behave macroscopically like liquids. A fundamental industrial unit operation is the hopper, a converging channel with inclined walls in which material flows downward under gravity and exits the storage bin through the bottom outlet. The simplest form of the flow corresponds to a wedge-shaped, quasi-two-dimensional geometry with smooth walls and a gravitational force directed radially toward the apex of the wedge. These flows were examined using the Mohr-Coulomb criterion in the classic work of Savage (1965), while Ravi Prakash and Rao (1988) used critical state theory. The smooth-wall, radial gravity (SWRG) wedge-shaped hopper is simulated here using the discrete element method (DEM) to test the existing theories. The DEM simulations involve the solution of Newton's equations, taking particle-particle interactions into account, to compute the stress and velocity fields of the flow in the SWRG system. Our computational results are consistent with the predictions of Savage (1965) and Ravi Prakash and Rao (1988), except in the region near the exit, where both viscous and frictional effects are present. To further understand this behaviour, a parametric analysis was carried out to analyze the rheology of wedge-shaped hoppers by varying the orifice diameter, wedge angle, friction coefficient, and stiffness. We find that velocity increases as the flow rate increases but decreases as the wedge angle and friction coefficient increase; no substantial changes in velocity were observed when the stiffness was varied. It is anticipated that the stresses at the exit result from the transfer of momentum during particle collisions; for this reason, relationships between viscosity and shear rate are shown, and all the data collapse onto a single curve.
In addition, it is demonstrated that viscosity and volume fraction exhibit power-law correlations with the inertial number, with all the data again collapsing onto a single curve. A continuum model for describing granular flows is presented using these empirical correlations.
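The inertial-number scaling mentioned above can be illustrated with the widely used μ(I) friction law for dense granular flows. This is a generic sketch with typical literature-style coefficients, not values fitted to the paper's DEM data:

```python
import math

def inertial_number(shear_rate, d, pressure, rho):
    """Dimensionless inertial number I = (shear_rate * d) / sqrt(P / rho),
    comparing the microscopic rearrangement time to the macroscopic shear time."""
    return shear_rate * d / math.sqrt(pressure / rho)

def mu_of_I(I, mu_s=0.38, mu_2=0.64, I0=0.3):
    """mu(I) friction law: effective friction grows from mu_s (quasi-static)
    toward mu_2 (rapid flow) as the inertial number increases."""
    return mu_s + (mu_2 - mu_s) / (I0 / I + 1.0)

# hypothetical flow state: 1 mm grains, density 2500 kg/m^3, 100 Pa confinement
I = inertial_number(shear_rate=10.0, d=1e-3, pressure=100.0, rho=2500.0)
print(I, mu_of_I(I))
```

Because both the effective friction (a viscosity-to-pressure ratio) and the volume fraction depend on the flow state only through I, data taken at different shear rates, pressures and grain sizes collapse onto single curves when plotted against it.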

Keywords: discrete element method, gravity flow, smooth-wall, wedge-shaped hoppers

Procedia PDF Downloads 62
286 Steps of the Pancreatic Differentiation in the Grass Snake (Natrix natrix) Embryos

Authors: Magdalena Kowalska, Weronika Rupik

Abstract:

The pancreas is an important organ present in all vertebrate species. It contains two different tissues, exocrine and endocrine, that act as two glands in one. The development and differentiation of the pancreas in reptiles is poorly known in comparison to other vertebrates. Therefore, the aim of this study was to investigate the particular steps in the differentiation of the pancreas in grass snake (Natrix natrix) embryos. For this, histological methods (including hematoxylin and eosin, and Heidenhain's AZAN staining), transmission electron microscopy, and three-dimensional (3D) reconstructions from serial paraffin sections were used. The results of this study indicated that the first step of pancreas development in Natrix was the connection of the two pancreatic buds, the dorsal and the ventral one. The duct walls in both buds then began to be remodeled from a multilayered to a single-layered epithelium. This remodeling started in the dorsal bud and was simultaneous with the differentiation of the duct lumens, which occurred by cavitation. During this process, the cells that had no contact with the mesenchyme underwent a form of cell death called anoikis. These findings indicated that the walls of the ducts in the embryonic pancreas of the grass snake were initially formed by abundant principal cells and single endocrine cells; the basal and goblet cells differentiated later. Among the endocrine cells, the B and A cells differentiated first, followed by the D and PP cells. The next step of pancreatic development was the withdrawal of the endocrine cells from the duct walls to form the pancreatic islets. The endocrine cells and islets were found only in the dorsal part of the pancreas in Natrix embryos, which differs from other vertebrate species. The islets were formed mainly by the A cells. Simultaneously with the differentiation of the endocrine pancreas, the acinar tissue started to differentiate.
The source of the acinar cells was the pancreatic ducts, as in other vertebrates. Acinus formation began at the proximal part of the pancreas and proceeded in the caudal direction. The differentiating pancreatic ducts developed into a branched system that can be divided into extralobular, intralobular, and intercalated ducts, similar to other vertebrate species; however, the pattern of branching was different. In conclusion, the particular steps of pancreas differentiation in the grass snake differed from those in other vertebrates. It can be supposed that these differences are related to the specific topography of the snake's internal organs and to its taxonomic position. All specimens used in the study were captured according to the Polish regulations concerning the protection of wild species. Permission was granted by the Local Ethics Commission in Katowice (41/2010; 87/2015) and the Regional Directorate for Environmental Protection in Katowice (WPN.6401.257.2015.DC).

Keywords: embryogenesis, organogenesis, pancreas, Squamata

Procedia PDF Downloads 147
285 Graphene Metamaterials Supported Tunable Terahertz Fano Resonance

Authors: Xiaoyong He

Abstract:

The manipulation of THz waves is still a challenging task due to the lack of natural materials that interact with them strongly. Designed by tailoring the characteristics of their unit cells (meta-molecules), metamaterials (MMs) may solve this problem. However, because of Ohmic and radiation losses, the performance of MM devices suffers from dissipation and a low quality factor (Q-factor). This dilemma may be circumvented by Fano resonance, which arises from the destructive interference between a bright continuum mode and a dark discrete mode (a narrow resonance). In contrast to the symmetric Lorentz spectral curve, a Fano resonance exhibits a distinctly asymmetric line shape, an ultrahigh quality factor, and steep variations in the spectral curve. Fano resonance is usually realized through symmetry breaking. However, if concentric double rings (DR) are placed close to each other, the near-field coupling between them gives rise to two hybridized modes (a bright mode and a narrowband dark mode) because of the local asymmetry, resulting in the characteristic Fano line shape. Furthermore, from a practical viewpoint, it is highly desirable to modulate the Fano spectral curves conveniently, which is an important and interesting research topic. In current Fano systems, tunable spectral curves can be realized by adjusting the geometrical structural parameters or by magnetic fields biasing a ferrite-based structure. But due to the limited dispersion properties of active materials, it is still difficult to tailor a Fano resonance conveniently with fixed structural parameters. With its favorable properties of extreme confinement and high tunability, graphene is a strong candidate for achieving this goal. The DR structure supports the excitation of so-called "trapped modes," with the merits of a simple structure and high-quality resonances in thin structures.
By depositing graphene circular DR on a SiO2/Si/polymer substrate, tunable Fano resonance has been theoretically investigated in the terahertz regime, including the effects of the graphene Fermi level, structural parameters and operation frequency. The results show that the Fano peak can be efficiently modulated because of the strong coupling between the incident waves and the graphene ribbons. As the Fermi level increases, the peak amplitude of the Fano curve increases, and the resonant peak shifts to higher frequency. The amplitude modulation depth of the Fano curves is about 30% when the Fermi level changes in the range 0.1-1.0 eV. The optimum gap distance between the DR is about 8-12 μm, where the figure of merit shows a peak. As the graphene ribbon width increases, the Fano spectral curves broaden, and the resonant peak shows a blue shift. These results are very helpful for developing novel graphene plasmonic devices, e.g. sensors and modulators.
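The asymmetric line shape discussed above follows the standard Fano formula F(ε) = (q + ε)² / (1 + ε²), with reduced detuning ε and asymmetry parameter q. The following sketch (illustrative parameters only, unrelated to the paper's simulated structure) shows its limiting behaviours:

```python
def fano_lineshape(omega, omega0, gamma, q):
    """Fano profile F = (q + eps)^2 / (1 + eps^2), where
    eps = 2 (omega - omega0) / gamma is the reduced detuning from resonance.
    q -> 0 gives a symmetric dip; |q| -> infinity approaches a Lorentzian peak."""
    eps = 2.0 * (omega - omega0) / gamma
    return (q + eps) ** 2 / (1.0 + eps ** 2)

# illustrative resonance at omega0 = 1.0 (arb. units), linewidth gamma = 0.1
print(fano_lineshape(1.00, 1.0, 0.1, q=1.0))  # eps =  0 -> q^2 = 1.0
print(fano_lineshape(0.95, 1.0, 0.1, q=1.0))  # eps = -1 -> exact zero (antiresonance)
print(fano_lineshape(1.05, 1.0, 0.1, q=1.0))  # eps = +1 -> maximum, 1 + q^2 = 2.0
```

The steep swing from an exact zero to the maximum within one linewidth is what gives Fano-resonant devices their high effective Q-factor and sensitivity.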

Keywords: graphene, metamaterials, terahertz, tunable

Procedia PDF Downloads 324
284 Preparation of IPNs and Effect of Swift Heavy Ions Irradiation on their Physico-Chemical Properties

Authors: B. S Kaith, K. Sharma, V. Kumar, S. Kalia

Abstract:

Superabsorbents are three-dimensional networks of linear or branched polymeric chains which can take up large volumes of biological fluids. This ability is due to the presence of functional groups like –NH2, –COOH and –OH. Such cross-linked products based on natural materials, such as cellulose, starch, dextran, gum and chitosan, have attracted the attention of scientists and technologists all over the world because of their easy availability, low production cost, non-toxicity and biodegradability. Since natural polymers have better biocompatibility and are less toxic than most synthetic ones, such materials can be applied in the preparation of controlled drug delivery devices, biosensors, tissue engineering, contact lenses, soil conditioning, and the removal of heavy metal ions and dyes. Gums are natural potential antioxidants and are used as food additives. They have excellent properties like high solubility, pH stability, non-toxicity and gelling characteristics. To date, many methods have been applied for the synthesis and modification of cross-linked materials with improved properties suitable for different applications. It is well known that ion beam irradiation can play a crucial role in synthesizing, modifying, cross-linking or degrading polymeric materials. Irradiation of a polymer film with high-energy heavy ions induces significant changes such as chain scission, cross-linking, structural changes, amorphization and degradation in the bulk. Various researchers have reported the effects of light and heavy ion irradiation on the properties of polymeric materials and observed significant improvements in optical, electrical, chemical, thermal and dielectric properties. Moreover, the modifications induced in the materials depend mainly on the structure, the ion beam parameters (energy, linear energy transfer, fluence, mass and charge) and the nature of the target material.
Ion-beam irradiation is a useful technique for improving the surface properties of biodegradable polymers without losing the bulk properties. Therefore, considerable interest has grown in studying the effects of SHI irradiation on the properties of synthesized semi-IPNs and IPNs. The present work deals with the preparation of semi-IPNs and IPNs and the impact of SHIs such as O7+ and Ni9+ on their optical, chemical, structural, morphological and thermal properties, along with the impact on different applications. The results have been discussed on the basis of the linear energy transfer (LET) of the ions.

Keywords: adsorbent, gel, IPNs, semi-IPNs

Procedia PDF Downloads 347
283 A Reduced Ablation Model for Laser Cutting and Laser Drilling

Authors: Torsten Hermanns, Thoufik Al Khawli, Wolfgang Schulz

Abstract:

In laser cutting, as well as in long-pulse laser drilling of metals, it can be demonstrated that the ablation shape (the shape of the cut faces or the hole shape, respectively) approaches a so-called asymptotic shape, such that it changes only slightly, or not at all, with further irradiation. These findings are already known from the ultrashort pulse (USP) ablation of dielectric and semiconducting materials. The explanation for the occurrence of an asymptotic shape in laser cutting and long-pulse drilling of metals is identified, and its underlying mechanism is numerically implemented, tested, and clearly confirmed by comparison with experimental data. In detail, there is now a model that allows the simulation of the temporal (pulse-resolved) evolution of the hole shape in laser drilling, as well as of the final (asymptotic) shape of the cut faces in laser cutting. This simulation requires so few resources that it can even run on common desktop PCs or laptops. Individual parameters can be adjusted using sliders, and the simulation result appears in an adjacent window and changes in real time. This is made possible by an application-specific reduction of the underlying ablation model. Because this reduction dramatically decreases the computational complexity, it produces results much more quickly, which means that the simulation can be carried out directly at the laser machine. Time-intensive experiments can be reduced and set-up processes completed much faster. The high speed of the simulation also opens up a range of entirely different options, such as metamodeling. Suitable for complex applications with many parameters, metamodeling involves generating high-dimensional data sets containing the parameters and several evaluation criteria for process and product quality. These sets can then be used to create individual process maps that show the dependency on individual parameter pairs.
This advanced simulation makes it possible to find global and local extreme values through mathematical means. Such simultaneous optimization of multiple parameters is scarcely possible experimentally. New manufacturing methods such as self-optimization can thus be executed much faster. However, the software's potential does not stop there; time-intensive calculations exist in many areas of industry. In laser welding or laser additive manufacturing, for example, the simulation of thermally induced residual stresses still uses up considerable computing capacity or is not possible at all. Transferring the principle of reduced models promises substantial savings there, too.
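The process-map idea described above, generating a data set over a parameter pair and extracting extreme values mathematically, can be sketched with a toy quality criterion. All names and values here are hypothetical placeholders, not the authors' reduced model:

```python
# hypothetical reduced process model: one quality criterion vs. two parameters
# (peak quality at power = 2.0, speed = 1.0, in arbitrary units)
def quality(power, speed):
    return -(power - 2.0) ** 2 - 0.5 * (speed - 1.0) ** 2

# because the reduced model is cheap, a dense grid over one parameter pair
# (a "process map") can be evaluated exhaustively and its optimum located
powers = [0.05 * i for i in range(81)]   # 0.0 .. 4.0
speeds = [0.05 * j for j in range(41)]   # 0.0 .. 2.0
process_map = {(p, s): quality(p, s) for p in powers for s in speeds}
best = max(process_map, key=process_map.get)
print(best)  # grid point at (or nearest to) the true optimum
```

With an expensive full simulation, such an exhaustive sweep over many parameter pairs would be infeasible; the speed of the reduced model is what makes metamodeling and multi-parameter optimization practical.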

Keywords: asymptotic ablation shape, interactive process simulation, laser drilling, laser cutting, metamodeling, reduced modeling

Procedia PDF Downloads 194
282 Porcelain Paste Processing by Robocasting 3D: Parameters Tuning

Authors: A. S. V. Carvalho, J. Luis, L. S. O. Pires, J. M. Oliveira

Abstract:

Additive manufacturing (AM) technologies have experienced remarkable growth in recent years due to the development and diffusion of a wide range of three-dimensional (3D) printing techniques. Nowadays, techniques such as fused filament fabrication are available to non-industrial users, while techniques like 3D printing, polyjet, selective laser sintering and stereolithography are mainly found in industry. Robocasting (R3D) shows great potential due to its ability to shape materials over a wide range of viscosities. Industrial porcelain compositions showing different rheological behaviour can be prepared and used as candidate materials to be processed by R3D; the use of this AM technique in industry is still very limited. In this work, a specific porcelain composition with suitable rheological properties was processed by R3D, and a systematic study of the tuning of the printing parameters is shown. The porcelain composition was formulated from an industrial spray-dried porcelain powder. The powder particle size and morphology were analysed. The powders were mixed with water and an organic binder in a ball mill at 200 rpm for 24 hours. The batch viscosity was adjusted by the addition of an acid solution, and the batch was mixed again. The paste density, viscosity, zeta potential, particle size distribution and pH were determined. In the R3D system, different speed and pressure settings were studied to assess their impact on the fabrication of porcelain models. These models were dried at 80 °C for 24 hours and sintered in air at 1350 °C for 2 hours. The stability of the models and the quality of their walls and surfaces were studied, and their physical properties were assessed. The microstructure and layer adhesion were observed by SEM. The studied processing parameters have a high impact on the quality of the models and, in particular, on the stacking of the filaments.
Adequate tuning of the parameters has a huge influence on the final properties of the porcelain models. This work contributes to a better assimilation of AM technologies in the ceramic industry. Acknowledgments: The RoboCer3D project, a project on rapid additive manufacturing through the 3D printing of ceramic material (POCI-01-0247-FEDER-003350), financed by Compete 2020 and PT 2020 through the European Regional Development Fund (FEDER) via the Competitiveness and Internationalization Operational Program (POCI) under the PT2020 partnership agreement.

Keywords: additive manufacturing, porcelain, robocasting, R3D

Procedia PDF Downloads 142
281 The Asymptotic Hole Shape in Long Pulse Laser Drilling: The Influence of Multiple Reflections

Authors: Torsten Hermanns, You Wang, Stefan Janssen, Markus Niessen, Christoph Schoeler, Ulrich Thombansen, Wolfgang Schulz

Abstract:

In long-pulse laser drilling of metals, it can be demonstrated that the ablation shape approaches a so-called asymptotic shape, such that it changes only slightly, or not at all, with further irradiation. These findings are already known from the ultrashort pulse (USP) ablation of dielectric and semiconducting materials. The explanation for the occurrence of an asymptotic shape in long-pulse drilling of metals is identified, and a model for the description of the asymptotic hole shape is numerically implemented, tested, and clearly confirmed by comparison with experimental data. The model assumes a robust process, in the sense that the characteristics of the melt flow inside the arising melt film do not change qualitatively when the laser or processing parameters are changed. Only robust processes are technically controllable and thus of industrial interest. The condition for a robust process is identified as a threshold for the mass flow density of the assist gas at the hole entrance, which has to be exceeded. Within the robust process regime, the melt flow characteristics can be captured by only one model parameter, namely an intensity threshold. In analogy to USP ablation (where it has long been known that the resulting hole shape follows from a threshold for the absorbed laser fluence), it is demonstrated that, in the case of robust long-pulse ablation, the asymptotic shape forms in such a way that the absorbed heat flux density equals the intensity threshold along the whole contour. The intensity threshold depends on the specific material and radiation properties and has to be calibrated by one reference experiment. The model is implemented in a numerical simulation called AsymptoticDrill, which requires so few resources that it can run on common desktop PCs, laptops, or even smart devices.
The resulting hole shapes can be calculated within seconds, which is a clear advantage over other simulations presented in the literature in the context of everyday industrial use. Against this background, the software is additionally equipped with a user-friendly GUI that allows intuitive use. Individual parameters can be adjusted using sliders, while the simulation result appears immediately in an adjacent window. Platform-independent development allows flexible use: an operator can use the tool to adjust the process very conveniently on a tablet, while a developer can execute it in the office to design new processes. Furthermore, to the best knowledge of the authors, AsymptoticDrill is the first simulation that allows the import of measured real beam distributions, and it thus calculates the asymptotic hole shape on the basis of the real state of the specific manufacturing system. In this paper, the emphasis is placed on investigating the effect of multiple reflections on the asymptotic hole shape, which gains in importance when drilling holes with large aspect ratios.
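The threshold condition described above (absorbed heat flux density equal to the intensity threshold along the contour) can be sketched numerically for an assumed Gaussian beam. This is an illustrative reconstruction under simplified geometry, not the AsymptoticDrill model, and it deliberately ignores the multiple reflections that are the paper's focus:

```python
import math

def asymptotic_hole_profile(i_peak, i_th, w, n=400):
    """Depth profile z(r) for a Gaussian beam I(r) = i_peak * exp(-2 r^2 / w^2),
    from the threshold condition I(r) * sin(alpha(r)) = i_th, where alpha is the
    local angle between the wall and the beam axis, so that dz/dr = -1/tan(alpha).
    """
    # hole edge: radius where the incident intensity drops to the threshold
    r_edge = w * math.sqrt(0.5 * math.log(i_peak / i_th))
    dr = r_edge / n
    rs, zs, z = [], [], 0.0
    for k in range(n, 0, -1):                 # integrate from the edge inward
        r = (k - 0.5) * dr
        s = i_th / (i_peak * math.exp(-2.0 * r * r / (w * w)))  # sin(alpha) <= 1
        z += dr * math.cos(math.asin(s)) / s                    # dz = dr / tan(alpha)
        rs.append(r)
        zs.append(z)
    return rs, zs  # depth grows monotonically toward the axis (r -> 0)

rs, zs = asymptotic_hole_profile(i_peak=10.0, i_th=1.0, w=1.0)
print(zs[-1])  # maximum depth, reached on the beam axis
```

At the hole edge the wall is nearly flat (full intensity is just at threshold), while toward the axis the wall steepens until the projected flux again equals the threshold; a large intensity-to-threshold ratio therefore yields a deep, high-aspect-ratio hole, the regime where multiple reflections become important.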

Keywords: asymptotic hole shape, intensity threshold, long pulse laser drilling, robust process

Procedia PDF Downloads 192
280 The Characterization and Optimization of Bio-Graphene Derived From Oil Palm Shell Through Slow Pyrolysis Environment and Its Electrical Conductivity and Capacitance Performance as Electrodes Materials in Fast Charging Supercapacitor Application

Authors: Nurhafizah Md. Disa, Nurhayati Binti Abdullah, Muhammad Rabie Bin Omar

Abstract:

This research addresses an existing knowledge gap: there is a lack of substantial studies on the fabrication and characterization of bio-graphene created from oil palm shell (OPS) by means of pre-treatment and slow pyrolysis. By fabricating bio-graphene from OPS, a novel material can be procured and used for graphene-based research. The produced bio-graphene is expected to possess the characteristic hexagonal graphene pattern and graphene properties comparable to previously fabricated graphene. The OPS will be pre-treated with zinc chloride (ZnCl₂) and iron(III) chloride (FeCl₃), and the bio-graphene will then be induced thermally by slow pyrolysis. The pyrolyzer's final temperature, heating rate, and residence time will be set at 550 °C, 5 °C/min, and 1 hour, respectively. Finally, the charred product will be washed with hydrochloric acid (HCl) to remove metal residue. The obtained bio-graphene will undergo different analyses to investigate the physicochemical properties of the two-dimensional layer of sp2-hybridized carbon atoms in a hexagonal lattice structure. The analyses that will take place are Raman spectroscopy, UV-visible spectroscopy (UV-VIS), transmission electron microscopy (TEM), scanning electron microscopy (SEM), and X-ray diffraction (XRD). Raman spectroscopy is used to analyze the three key peaks found in graphene, namely the D, G, and 2D peaks, which indicate the quality of the bio-graphene structure and the number of layers generated. To corroborate the layer determination, UV-VIS may be used to confirm the layer analysis and also to characterize the type of graphene procured. A clear physical image of the graphene can be obtained by TEM analysis, in order to study the structural quality and layer condition, and by SEM, in order to study the surface quality and the repeating porosity pattern.
Lastly, XRD can be used to establish the crystallinity of the produced bio-graphene and, at the same time, the degree of oxygen contamination and thus the pristineness of the graphene. In conclusion, this study obtains bio-graphene from OPS, a novel material, through pre-treatment with ZnCl₂ and FeCl₃ and slow pyrolysis, and provides a characterization analysis of the bio-graphene that will be beneficial for future graphene-related applications. The characterization should yield findings similar to previous papers, confirming the graphene quality.

Keywords: oil palm shell, bio-graphene, pre-treatment, slow pyrolysis

Procedia PDF Downloads 58
279 Created Duration and Stillness: Chinese Director Zhang Ming Images to Matrophobia Dreamland in Films

Authors: Sicheng Liu

Abstract:

Zhang Ming is a writer-director in China who has never been A-listed but is famous for his poetic art-house filmmaking in mainland China and for his fascination with the spectacles of tiny places in south China. Overall, Zhang’s works concentrate on the interconnection among images of settlements, desirable fictional storytelling, and the dilemma of alienated interpersonal relationships. Zhang uses his pendulous camerawork to reconstruct the spectacles of his hometown, Wushan county, and of detached places such as lower-tier cities or remote areas close to nature, where the old spectacles are undergoing great transformation and vanishment. Under his camera, the cities carry geo-cultural and geopolitical implications: these places are not only settlements for residents to live in but also representations of the abstraction of time passing, of dimensional disorientation, and of the revealing of people’s inner lives. Zhang Ming is good at creating essay-like expression, a poetic atmosphere, and vague metaphors in his films, so as to show the sensitivity, aimlessness, and slight anxiety of Chinese wenren (intellectuals) and their distinctive experience of aspects inside and outside their living circumstances: typically, for example, the transformation of the environment, the obscure expression of inner desires and aspirations, personal loneliness born of isolation, slight anxiety about the uncertainty of life, and other mental dilemmas brought on by maladjustment. Zhang’s works also impress audiences as slow cinema: by creating stillness, complexity, and fluidity of image and sound, decompressing linear time, and wandering within enclosed loop-back spaces with his camera, he produces poeticized depiction and mysterious dimensions in his films.
This paper aims to summarize these features of Zhang’s films by analyzing the filmic texts and filmmaking style, in order to show that, as a wenren-turned-filmmaker, Zhang Ming is adept at using metaphor to create artistic situations that bring out the poetry of his films and portray his characters. In addition, Zhang Ming’s style reflects some aesthetic features of Chinese wenren cinema.

Keywords: Chinese wenren cinema, intellectuals’ awareness, slow cinema, slowness and dampness, people and environment

Procedia PDF Downloads 175
278 Electrophoretic Light Scattering Based on Total Internal Reflection as a Promising Diagnostic Method

Authors: Ekaterina A. Savchenko, Elena N. Velichko, Evgenii T. Aksenov

Abstract:

The development of pathological processes, such as cardiovascular and oncological diseases, is accompanied by changes in molecular parameters in cells, tissues, and serum. The study of the behavior of protein molecules in solution is of primary importance for the diagnosis of such diseases. Various physical and chemical methods are used to study molecular systems. With the advent of the laser and advances in electronics, methods such as scanning electron microscopy, sedimentation analysis, nephelometry, and static and dynamic light scattering have become the most universal, informative, and accurate tools for estimating the parameters of nanoscale objects. Electrophoretic light scattering is an especially effective technique, with high potential in the study of biological solutions and their properties. It allows one to investigate the aggregation and dissociation of different macromolecules and to obtain information on their shapes, sizes, and molecular weights. Electrophoretic light scattering is an analytical method that registers the motion of microscopic particles under the influence of an electric field by means of quasi-elastic light scattering in a homogeneous solution, with subsequent registration of the spectral or correlation characteristics of the light scattered from the moving object. We modified the technique by using the regime of total internal reflection, with the aim of increasing its sensitivity and reducing the volume of the sample to be investigated, which opens the prospect of automating simultaneous multiparameter measurements. In addition, total internal reflection allows one to study biological fluids at the level of single molecules, which further increases the sensitivity and informativeness of the results, because data obtained from an individual molecule are not averaged over an ensemble; this is important in the study of biomolecular fluids.
To the best of our knowledge, the study of electrophoretic light scattering in the regime of total internal reflection is proposed here for the first time. Latex microspheres 1 μm in size were used as test objects. In this study, the total internal reflection regime was realized on a quartz prism on which the free-electrophoresis regime was established. A semiconductor laser with a wavelength of 655 nm was used as the radiation source, and the light-scattering signal was registered by a PIN photodiode. The signal from the photodetector was then transmitted to a digital oscilloscope and to a computer. The autocorrelation functions and the fast Fourier transform were calculated, both in the regime of Brownian motion and under the action of the field, to obtain the parameters of the object investigated. The main result of the study is the dependence of the autocorrelation function on the concentration of microspheres and the magnitude of the applied field. The effect of heating became more pronounced with increasing sample concentration and electric field strength. The results obtained in our study demonstrate the applicability of the method to the examination of liquid solutions, including biological fluids.
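The signal-processing step described above, computing the autocorrelation function and the fast Fourier transform of the photodetector trace to extract the dominant Doppler frequency, can be sketched as follows. The sampling rate, signal frequency, and noise level are illustrative assumptions, not the experimental values:

```python
import numpy as np

def autocorrelation(signal):
    """Normalized autocorrelation of a zero-mean signal (positive lags only)."""
    x = signal - signal.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    return acf / acf[0]

# Synthetic photodetector trace: a Doppler-shifted component from
# electrophoretic drift plus noise (all numbers are illustrative).
fs = 10_000.0                        # sampling rate, Hz (assumed)
t = np.arange(0, 0.1, 1 / fs)
rng = np.random.default_rng(0)
trace = np.cos(2 * np.pi * 120 * t) + 0.5 * rng.standard_normal(t.size)

acf = autocorrelation(trace)
spectrum = np.abs(np.fft.rfft(trace)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak_hz = freqs[spectrum.argmax()]   # dominant Doppler frequency estimate
```

The decay of `acf` and the position of `peak_hz` are the quantities whose dependence on concentration and field magnitude the study reports.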

Keywords: light scattering, electrophoretic light scattering, electrophoresis, total internal reflection

Procedia PDF Downloads 188
277 The Algerian Experience in Developing Higher Education in the Country in Light of Modern Technology: Challenges and Prospects

Authors: Mohammed Messaoudi

Abstract:

The higher education sector in Algeria has witnessed a remarkable transformation in recent years, as institutions have integrated into the modern technological environment and harnessed every appropriate mechanism to raise the level of education and training. Observers and stakeholders agree that it is necessary for the Algerian university to enter this field, especially given the efforts to employ modern technology in the sector and to encourage investment in it, in addition to the state’s keenness to build a path toward benefiting from modern technology and to mobilize energies, in light of a reality that carries many aspirations and challenges, by achieving openness to the new digital environment and keeping pace with international universities. Higher education is one of the engines of the development of societies, as it is a vital field for the transfer of knowledge and scientific expertise, and the university stands at the top of the comprehensive educational system across disciplines. A multi-dimensional educational system is achieved through the integration of three basic axes that establish a sound educational process (teaching, research, and relevant, efficient outputs), according to a clear strategy that monitors the advancement of academic work and develops its future directions. The Algerian university is a service institution seeking the optimal mechanisms to keep pace with the changes of the times: it has become necessary for the university to enter the technological space and thereby ensure the quality of its education, achieving the required empowerment by dedicating a structure that matches the challenges facing the sector, amid unremitting efforts to develop its capabilities.
The sector has sought to harness the mechanisms of communication and information technology and to achieve a transformation of higher education through what is called higher education technology. The conceptual framework of information and communication technology in Algerian higher education institutions is determined by organizational factors, institutional factors, the characteristics of professors, the characteristics of students, and the outcomes of the educational process; there is a relentless pursuit of positive interaction among these axes, as they are the basic components on which the success of higher education and the achievement of its goals depend.

Keywords: information and communication technology, Algerian university, scientific and cognitive development, challenges

Procedia PDF Downloads 62
276 Diagnostic Yield of CT PA and Value of Pre-Test Assessments in Predicting the Probability of Pulmonary Embolism

Authors: Shanza Akram, Sameen Toor, Heba Harb Abu Alkass, Zainab Abdulsalam Altaha, Sara Taha Abdulla, Saleem Imran

Abstract:

Acute pulmonary embolism (PE) is a common disease and can be fatal. The clinical presentation is variable and nonspecific, making accurate diagnosis difficult. Testing of patients with suspected acute PE has increased dramatically. However, the overuse of some tests, particularly CT and D-dimer measurement, may not improve care while potentially leading to patient harm and unnecessary expense. CTPA is the investigation of choice for PE. Its easy availability, accuracy, and ability to provide an alternative diagnosis have lowered the threshold for performing it, resulting in its overuse. Guidelines recommend the use of clinical pretest probability tools such as the Wells score to assess the risk of suspected PE. Unfortunately, implementation of guidelines in clinical practice is inconsistent. This has led to low-risk patients being subjected to unnecessary imaging, exposure to radiation, and possible contrast-related complications. Aim: To study the diagnostic yield of CTPA and the clinical pretest probability of patients according to the Wells score, and to determine whether or not CTPA is overused in our service. Methods: CT scans performed on patients with suspected PE in our hospital from 1st January 2014 to 31st December 2014 were retrospectively reviewed. Medical records were reviewed for demographics, clinical presentation, and final diagnosis, and to establish whether the Wells score and D-dimer were used correctly in predicting the probability of PE and the need for subsequent CTPA. Results: 100 patients (51 male) underwent CTPA in the study period. Mean age was 57 years (range 24-91 years). The majority of patients presented with shortness of breath (52%). Other presenting symptoms included chest pain (34%), palpitations (6%), collapse (5%), and haemoptysis (5%). A D-dimer test was done in 69%. The overall Wells score was low (<2) in 28%, moderate (2-6) in 47%, and high (>6) in 15% of patients. The Wells score was documented in the medical notes of only 20% of patients.
PE was confirmed in 12% of patients (8 male); 4 had bilateral PEs. In the high-risk group (Wells >6, n=15) there were 5 diagnosed PEs; in the moderate-risk group (Wells 2-6, n=47) there were 6; and in the low-risk group (Wells <2, n=28) one case of PE was confirmed. CT scans negative for PE showed pleural effusion in 30 patients, consolidation in 20, atelectasis in 15, and a pulmonary nodule in 4; 31 scans were completely normal. Conclusion: The yield of CTPA for pulmonary embolism was low in our cohort, at 12%. A significant number of our patients who underwent CTPA had a low Wells score. This suggests that CTPA is overutilized in our institution. The Wells score was poorly documented in the medical notes. CTPA was able to detect alternative pulmonary abnormalities explaining the patients' clinical presentation. CTPA requires concomitant pretest clinical probability assessment to be an effective diagnostic tool for confirming or excluding PE. Clinicians should use validated clinical prediction rules to estimate pretest probability in patients in whom acute PE is being considered. Combining the Wells score with clinical and laboratory assessment may reduce the need for CTPA.
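As an illustration of the pretest stratification used in the audit, a minimal sketch of the three-tier Wells score follows; the criterion weights are those of the published Wells criteria for PE, while the example patient is hypothetical:

```python
# Published Wells criteria for pulmonary embolism and their weights.
WELLS_CRITERIA = {
    "clinical_signs_of_dvt": 3.0,
    "pe_most_likely_diagnosis": 3.0,
    "heart_rate_over_100": 1.5,
    "immobilization_or_recent_surgery": 1.5,
    "previous_dvt_or_pe": 1.5,
    "haemoptysis": 1.0,
    "malignancy": 1.0,
}

def wells_score(findings):
    """Sum the weights of the criteria present in `findings` (a set of keys)."""
    return sum(WELLS_CRITERIA[f] for f in findings)

def risk_band(score):
    """Three-tier stratification used in the study: low <2, moderate 2-6, high >6."""
    if score < 2:
        return "low"
    return "moderate" if score <= 6 else "high"

patient = {"heart_rate_over_100", "haemoptysis"}   # hypothetical case
score = wells_score(patient)                       # 2.5 -> "moderate"
```

Documenting this calculation in the notes before requesting imaging is precisely the practice the audit found lacking.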

Keywords: CT PA, D-dimer, pulmonary embolism, Wells score

Procedia PDF Downloads 196
275 Groundwater Numerical Modeling, an Application of Remote Sensing, and GIS Techniques in South Darb El Arbaieen, Western Desert, Egypt

Authors: Abdallah M. Fayed

Abstract:

The study area is located in south Darb El Arbaieen, in the western desert of Egypt. It occupies the area between latitudes 22° 00′ and 22° 30′ North and longitudes 29° 30′ and 30° 00′ East, from the southern border of Egypt to the area north of Bir Kuraiym and from the area east of East Owienat to the area west of the Tushka district, covering about 2750 km². Its well-known features are the southern part of the Darb El Arbaieen road, G Baraqat El Scab El Qarra, Bir Dibis, Bir El Shab, and Bir Kuraiym. Interpretation of the soil stratification shows layers belonging to the Quaternary and Upper-Lower Cretaceous. The area is dissected by a series of NE-SW striking faults. The regional groundwater flow is in the SW-NE direction, with a hydraulic gradient of 1 m per 2 km. A mathematical model has been applied to evaluate the groundwater potential of the main aquifer, the Nubian Sandstone, in the study area, and remote sensing is a powerful, accurate, and time-saving technique in this respect. These techniques are widely used for illustrating and analyzing different phenomena, such as new development in the desert (land reclamation), residential development (new communities), urbanization, etc. With water development the major issue, one objective of this work is to determine the new development areas in the western desert of Egypt during the period from 2003 to 2015 using remote sensing; the impacts of present and future development have been evaluated using the two-dimensional numerical groundwater flow simulation package Visual MODFLOW 4.2. The package was used to construct and calibrate a numerical model that can simulate the response of the aquifer in the study area under different management alternatives, in the form of changes in piezometric levels and salinity. The total simulation period is 100 years. After steady-state calibration, two different groundwater development scenarios were simulated.
21 production wells are installed in the study area and represented in the model, with total discharges of 105,000 m³/d and 210,000 m³/d for the two scenarios. The resulting drawdowns were 11.8 m and 23.7 m at the end of the 100 years. Contour maps of water heads and drawdown, and hydrographs of the piezometric head, are presented. The drawdown was less than half of the saturated thickness (the safe-yield case).
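The drawdown figures above come from the calibrated Visual MODFLOW model; as a much simpler analytical stand-in, per-well drawdown in a confined aquifer can be sketched with the Theis solution. The transmissivity and storativity below are assumed values for illustration only, not calibrated properties of the Nubian Sandstone:

```python
import math

def theis_drawdown(Q, T, S, r, t):
    """Theis drawdown s = Q/(4*pi*T) * W(u), u = r^2*S/(4*T*t), with the
    well function W(u) summed from its convergent series (small u)."""
    u = r * r * S / (4.0 * T * t)
    w = -0.5772156649 - math.log(u) + u   # Euler-Mascheroni constant, ln term
    term = u
    for n in range(2, 30):
        term *= -u * (n - 1) / (n * n)    # next term (-1)^(n+1) u^n / (n * n!)
        w += term
    return Q / (4.0 * math.pi * T) * w

# One well at 5,000 m^3/d (105,000 / 21 wells), observed 1 km away after
# 100 years; T = 500 m^2/d and S = 1e-4 are assumed for illustration.
s = theis_drawdown(Q=5000.0, T=500.0, S=1e-4, r=1000.0, t=36500.0)  # ~10 m
```

The single-well estimate is of the same order as the reported scenario drawdown; the numerical model additionally superposes all 21 wells and the aquifer geometry.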

Keywords: remote sensing, management of aquifer systems, simulation modeling, western desert, South Darb El Arbaieen

Procedia PDF Downloads 376
274 Evaluating Social Sustainability in Historical City Center in Turkey: Case Study of Bursa

Authors: Şeyda Akçalı

Abstract:

This study explores the concept of social sustainability and its characteristics in terms of the neighborhood (mahalle), a social phenomenon in Turkish urban life. As social sustainability indicators move away from traditional themes toward multi-dimensional measures, solutions for urban strategies may be achieved by learning lessons from historical precedents. The study considers how the inherent values of traditional urban forms contribute to the evolution of the city as well as to its social functions. It aims to measure non-tangible issues in order to evaluate social sustainability in historic urban environments and how they could contribute to current urban planning strategies. The concept of the neighborhood (mahalle) refers to a way of living that represents the organization of Turkish social and communal life, rather than defining an administrative unit of the city. The distinctive physical and social features of the neighborhood illustrate the link between social sustainability and the historic urban environment. Instead of taking a nostalgic view of the past, the study identifies both the failures and the successes of traditional urban environments, extracts lessons from them, and adapts those lessons to the modern context. First, the study determines the aspects of social sustainability that are issued as key themes in the literature. Then, it develops a model describing the social features of the mahalle that are consistent with the social sustainability agenda. The model is used to analyze whether a traditional housing area in the historical city center of Bursa, Turkey meets residents’ social needs and contributes to the collective functioning of the community. Through a questionnaire survey conducted in the historic neighborhoods, residents were assessed against the social sustainability criteria of the neighborhood.
The results derived from the factor analysis indicate that the social aspects of the neighborhood are social infrastructure, identity, attachment, neighborliness, safety, and wellbeing. Qualitative evaluation shows the relationship between the key aspects of social sustainability and demographic and socio-economic factors. The outcomes support the view that the inherent values of the neighborhood retain their importance for the sustainability of the community, although local arrangements are needed for a few factors, taking great care not to compromise the others. The concept of the neighborhood should be considered a potential tool to support social sustainability in the national political agenda and in urban policies. The performance of the underlying factors in the historic urban environment offers a basis for examining and improving traditional urban areas and for understanding how they may contribute to the overall city.

Keywords: historical city center, mahalle, neighborhood, social sustainability, traditional urban environment, Turkey

Procedia PDF Downloads 264
273 Prediction of Sepsis Illness from Patients' Vital Signs Using Long Short-Term Memory Network and Dynamic Analysis

Authors: Marcio Freire Cruz, Naoaki Ono, Shigehiko Kanaya, Carlos Arthur Mattos Teixeira Cavalcante

Abstract:

The systems that record patient care information, known as Electronic Medical Records (EMRs), and those that monitor patients' vital signs, such as heart rate, body temperature, and blood pressure, have been extremely valuable for the effectiveness of patient treatment. Many studies have used data from EMRs and patients' vital signs to predict illnesses. Among them, we highlight those that aim to predict, classify, or at least identify patterns of sepsis in patients under vital-sign monitoring. Sepsis is an organ dysfunction caused by a dysregulated response of the patient to an infection, and it affects millions of people worldwide. Early detection of sepsis is expected to provide a significant improvement in its treatment. Preceding works usually combined medical, statistical, mathematical, and computational models to develop early-prediction methods, seeking higher accuracy with the smallest number of variables. Among other techniques, studies using survival analysis, expert systems, machine learning, and deep learning have achieved strong results. In our research, patients are modeled as points moving each hour in an n-dimensional space, where n is the number of vital signs (variables). These points can reach a sepsis target point after some time. For now, the sepsis target point is calculated as the median of all patients' variables at sepsis onset. From these points, we calculate for each hour the position vector, the first derivative (velocity vector), and the second derivative (acceleration vector) of the variables to evaluate their behavior, and we construct a prediction model based on a Long Short-Term Memory (LSTM) network that includes these derivatives as explanatory variables. The prediction accuracy 6 hours before the onset of sepsis reached 83.24% considering only the vital signs, and 94.96% when the position, velocity, and acceleration vectors were included.
The data are collected from the Medical Information Mart for Intensive Care (MIMIC) database, a public database that contains vital signs, laboratory test results, observations, notes, and more from over 60,000 patients.
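The position/velocity/acceleration construction described above can be sketched with finite differences over an hourly vital-sign matrix; the LSTM itself is omitted here, and the window of vitals is hypothetical:

```python
import numpy as np

def kinematic_features(vitals):
    """Given an (hours x n_vitals) array, return position, velocity
    (first difference), and acceleration (second difference) stacked
    per time step, for use as explanatory variables of an LSTM."""
    position = vitals
    velocity = np.diff(vitals, n=1, axis=0, prepend=vitals[:1])
    acceleration = np.diff(velocity, n=1, axis=0, prepend=velocity[:1])
    return np.concatenate([position, velocity, acceleration], axis=1)

# Hypothetical 6-hour window of 3 vitals (heart rate, temperature, MAP).
window = np.array([
    [ 88.0, 36.9, 85.0],
    [ 92.0, 37.2, 82.0],
    [ 97.0, 37.8, 78.0],
    [103.0, 38.3, 74.0],
    [110.0, 38.9, 69.0],
    [118.0, 39.4, 63.0],
])
features = kinematic_features(window)   # shape (6, 9): pos + vel + acc
```

The `prepend` argument pads the first hour with a zero difference so every time step yields a full feature vector.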

Keywords: dynamic analysis, long short-term memory, prediction, sepsis

Procedia PDF Downloads 99
272 An Evolutionary Approach for Automated Optimization and Design of Vivaldi Antennas

Authors: Sahithi Yarlagadda

Abstract:

The design of an antenna is constrained by mathematical and geometrical parameters. Though there are diverse antenna structures with a wide range of feeds, there remain many geometries to be tried that cannot be fitted into predefined computational methods. Antenna design and optimization qualify for an evolutionary algorithmic approach, since the antenna parameters depend directly on the geometric characteristics. The evolutionary algorithm can be explained simply: for a given quality function to be maximized, we randomly create a set of candidate solutions, elements of the function's domain, and apply the quality function as an abstract fitness measure. Based on this fitness, some of the better candidates are chosen to seed the next generation by applying recombination and mutation to them. In the conventional approach, the quality function is unaltered across iterations, but the antenna parameters and geometries are too varied to fit into a single function. So, weight coefficients are obtained for all possible antenna electrical parameters and geometries, and the variation is learnt by mining the data obtained, toward an optimized algorithm. The weight and covariance coefficients of the corresponding parameters are logged as datasets for learning and future use. This paper drafts an approach to obtaining the requirements for studying and methodizing the evolutionary approach to automated antenna design, using our past work on the Vivaldi antenna as the test candidate. Antenna parameters like gain, directivity, etc. are directly governed by the geometry, materials, and dimensions. The design equations are noted and evaluated over all possible conditions to obtain the maxima and minima for the given frequency band. The boundary conditions are thus obtained prior to implementation, easing the optimization. The implementation mainly aimed to study the practical computational, processing, and design complexities incurred during simulation. HFSS was chosen for simulations and results.
MATLAB is used to generate the computations and combinations and to log data; it is also used to apply machine learning algorithms and to plot the data for designing the algorithm. Since the number of combinations is too large to test manually, the HFSS API is used to call HFSS functions from MATLAB itself, and the MATLAB Parallel Computing Toolbox is used to run multiple simulations in parallel. The aim is to develop an add-in to antenna design software like HFSS or CST, or a standalone application, to optimize pre-identified common parameters of the wide range of antennas available. In this paper, we have used MATLAB to calculate Vivaldi antenna parameters like slotline characteristic impedance, stripline impedance, slotline width, flare aperture size, and dielectric constant, and K-means and a Hamming window are applied to obtain the best test parameters. The HFSS API is used to calculate the radiation, bandwidth, directivity, and efficiency, and the data are logged for applying the evolutionary genetic algorithm in MATLAB. The paper demonstrates the computational weights and the machine learning approach for automated antenna optimization for the Vivaldi antenna.
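The study drives the genetic algorithm from MATLAB against HFSS; as a language-neutral sketch, the selection/crossover/mutation loop looks like the following, with a stand-in fitness function in place of an electromagnetic simulation (the parameter names and target values are hypothetical):

```python
import random

def evolve(fitness, bounds, pop_size=30, generations=40, mutation=0.1):
    """Minimal genetic algorithm over real-valued parameter vectors.
    `fitness` is maximized; in the workflow above it would wrap an
    HFSS simulation, here any callable stands in."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, dim) if dim > 1 else 0
            child = a[:cut] + b[cut:]                  # one-point crossover
            for i, (lo, hi) in enumerate(bounds):      # bounded mutation
                if random.random() < mutation:
                    child[i] = random.uniform(lo, hi)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

random.seed(0)  # reproducible illustration
# Stand-in fitness: peak at flare_width=12 mm, slot_width=0.5 mm (hypothetical).
target = (12.0, 0.5)
fit = lambda p: -((p[0] - target[0]) ** 2 + (p[1] - target[1]) ** 2)
best = evolve(fit, bounds=[(5.0, 20.0), (0.1, 2.0)])
```

Because the parents survive each generation, the best candidate found so far is never lost, which matters when each fitness evaluation is an expensive simulation.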

Keywords: machine learning, Vivaldi, evolutionary algorithm, genetic algorithm

Procedia PDF Downloads 88
271 Analysis of Waterjet Propulsion System for an Amphibious Vehicle

Authors: Nafsi K. Ashraf, C. V. Vipin, V. Anantha Subramanian

Abstract:

This paper reports the design of a waterjet propulsion system for an amphibious vehicle based on the circulation distribution over the camber line for the sections of the impeller and stator. In contrast with conventional waterjet designs, the inlet duct is straight, so that water enters parallel to and in line with the nozzle exit. The extended nozzle after the stator bowl makes the flow more axial, further improving thrust delivery. A waterjet works on the principle of volume flow rate through the system and, unlike the propeller, is an internal-flow system. The major difference between the propeller and the waterjet occurs in the flow passing the actuator. Though a ducted propeller could constitute the equivalent of waterjet propulsion, in a realistic situation the nozzle area of the waterjet would be proportionately larger relative to the inlet area and propeller disc area. Moreover, the flow rate through the impeller disk is controlled by the nozzle area. For these reasons the waterjet design is based on pump systems rather than propellers, and it is therefore important to bring out the characteristics of the flow from this point of view. The analysis is carried out using computational fluid dynamics. The design of the waterjet propulsion was carried out by adapting axial-flow pump design, and the performance analysis was done with a three-dimensional computational fluid dynamics (CFD) code. Given the varying environmental conditions, the necessity of high discharge and low head, and the space confinement of the given amphibious vehicle, an axial pump design is suitable. The major problem with the inlet velocity distribution is the large variation of velocity in the circumferential direction, which gives rise to heavy blade loading that varies with time. The cavitation criteria have also been taken into account, as per hydrodynamic pump design practice. Generally, a waterjet propulsion system can be divided into the inlet, the pump, the nozzle, and the steering device.
The pump further comprises an impeller and a stator. Analytical and numerical approaches, such as a RANSE solver, have been used to understand the performance of the designed waterjet propulsion system. Unlike in the case of propellers, the analysis was based on the head-flow curve together with the efficiency and power curves. The modeling of the impeller is performed using a rigid-body motion approach. The realizable k-ε model has been used for turbulence modeling. Appropriate boundary conditions are applied to the domain, and domain-size and grid-dependence studies are carried out.
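The volume-flow-rate principle mentioned above leads directly to the ideal momentum-flux estimate of waterjet thrust, T = ρQ(Vj − Vi) with jet velocity Vj = Q/A_nozzle; a minimal sketch with hypothetical numbers, ignoring losses and wake effects:

```python
RHO_SEAWATER = 1025.0  # kg/m^3

def waterjet_thrust(flow_rate, nozzle_area, inlet_velocity):
    """Ideal momentum-flux thrust T = rho * Q * (Vj - Vi),
    with jet velocity Vj = Q / A_nozzle; all losses ignored."""
    jet_velocity = flow_rate / nozzle_area
    return RHO_SEAWATER * flow_rate * (jet_velocity - inlet_velocity)

# Hypothetical values: Q = 0.5 m^3/s, nozzle area 0.05 m^2, vehicle speed 3 m/s.
thrust = waterjet_thrust(flow_rate=0.5, nozzle_area=0.05, inlet_velocity=3.0)  # ~3.6 kN
```

The formula makes the design point of the abstract concrete: shrinking the nozzle area raises the jet velocity and thrust for a given flow rate, which is why the nozzle area controls the flow rate through the impeller disk.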

Keywords: amphibious vehicle, CFD, impeller design, waterjet propulsion

Procedia PDF Downloads 194
270 Engineering Photodynamic with Radioactive Therapeutic Systems for Sustainable Molecular Polarity: Autopoiesis Systems

Authors: Moustafa Osman Mohammed

Abstract:

This paper introduces Luhmann’s autopoietic social systems, starting with the original concept of autopoiesis from biologists and scientists and including the modification of general systems based on socialized medicine. A specific type of autopoietic system is explained for the three existing groups of ecological phenomena: the interactional, social, and medical sciences. This hypothesized model nevertheless has a nonlinear interaction with its natural environment, an ‘interactional cycle’ for the exchange of photon energy with molecules without any change in topology. The external forces in the system’s environment may be concomitant with the influence of natural fluctuations (e.g., radioactive radiation, electromagnetic waves). The cantilever sensor offers insights for a future chip processor for prevention within social metabolic systems. Thus, circuits with resonant electric and optical properties are prototyped on board as an intra-chip/inter-chip transmission, producing electromagnetic energy of approximately 1.7 mA at 3.3 V to serve detection in locomotion with the least significant power losses. Nowadays, therapeutic systems assimilate materials from embryonic stem cells to aggregate the multiple functions of the vessels’ natural de-cellularized structure for replenishment. Meanwhile, the interior actuators exploit the base-pair complementarity of nucleotides for the symmetric arrangement, in particular bacterial nanonetworks, of the sequence cycle creating double-stranded DNA strings. The DNA strands must be sequenced, assembled, and decoded in order to reconstruct the original source reliably.
The design of the exterior actuators has the ability to sense different variations in the corresponding patterns of beat-to-beat heart rate variability (HRV) for the spatial autocorrelation of molecular communication, which draws on human electromagnetic, piezoelectric, electrostatic, and electrothermal energy to monitor and transfer the dynamic changes of all the cantilevers simultaneously in a real-time workspace with high precision. A prototype dynamic energy sensor has been investigated in the laboratory for the inclusion of nanoscale devices in the architecture, with fuzzy-logic control for the detection of thermal and electrostatic changes and optoelectronic devices to interpret the uncertainty associated with signal interference. Ultimately, the molecular frictional properties adjust to each other and form unique spatial structure modules, providing the environment’s mutual contribution to the investigation of mass-temperature changes due to the pathogenic archival architecture of clusters.

Keywords: autopoiesis, nanoparticles, quantum photonics, portable energy, photonic structure, photodynamic therapeutic system

Procedia PDF Downloads 98
269 Hydraulic Performance of Curtain Wall Breakwaters Based on Improved Moving Particle Semi-Implicit Method

Authors: Iddy Iddy, Qin Jiang, Changkuan Zhang

Abstract:

This paper addresses the hydraulic performance of curtain-wall breakwaters as coastal protection structures, based on particle-method modelling. The curtain wall functions hydraulically as a wave barrier: a large part of the incident waves is reflected by the vertical wall, a part is transmitted, and a particular part of the wave energy is dissipated through the eddy flows formed beneath the lower end of the plate. As a Lagrangian particle method with a robust capability for numerical representation, the Moving Particle Semi-implicit (MPS) method has proven useful for the design of structures involving free-surface hydrodynamic flow, such as wave breaking and overtopping. In this study, a vertical two-dimensional numerical model for the simulation of the violent flow associated with the interaction between curtain-wall breakwaters and progressive water waves is developed with the MPS method, in which a higher-precision pressure-gradient model and a free-surface particle-recognition model are proposed. The wave transmission, reflection, and energy dissipation at the vertical wall were examined experimentally and theoretically. With the numerical wave flume built on the particle method, very detailed velocity and pressure fields around the curtain walls under wave action can be computed at each calculation step, and the effect of different wave and structural parameters on the hydrodynamic characteristics was investigated. The simulated temporal profiles and distributions of velocity and pressure in the vicinity of the curtain-wall breakwaters are also compared with the experimental data. The numerical investigation of the hydraulic performance of curtain-wall breakwaters indicated that the incident wave is largely reflected from the structure, while the large eddies or turbulent flows occurring beneath the curtain wall result in large energy losses.
The improved MPS method shows good agreement between the numerical results and the analytical/experimental data of related research. It is thus verified that the improved pressure-gradient model and free-surface particle-recognition method enhance the stability and accuracy of the MPS model for water waves and marine structures. It is therefore possible for the particle method (MPS) to achieve, through further study, a level of correctness appropriate for application in engineering fields.
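The reflection/transmission/dissipation bookkeeping used when examining the wall's hydraulic performance follows the wave-energy balance E_d = 1 − K_r² − K_t²; a minimal sketch with hypothetical coefficients:

```python
def dissipation_coefficient(kr, kt):
    """Energy-loss fraction implied by measured reflection (kr) and
    transmission (kt) coefficients: E_d = 1 - Kr^2 - Kt^2."""
    ed = 1.0 - kr**2 - kt**2
    if ed < 0:
        raise ValueError("Kr and Kt imply more wave energy out than in")
    return ed

# Hypothetical coefficients for a partially submerged curtain wall.
ed = dissipation_coefficient(kr=0.6, kt=0.5)   # about 0.39 of the energy lost
```

In the study's terms, a high `kr` with a substantial `ed` corresponds to strong reflection off the wall combined with losses in the eddies beneath the plate.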

Keywords: curtain wall breakwaters, free surface flow, hydraulic performance, improved MPS method

Procedia PDF Downloads 129
268 The Phenomenology in the Music of Debussy through Inspiration of Western and Oriental Culture

Authors: Yu-Shun Elisa Pong

Abstract:

Music aesthetics in relation to phenomenology is rarely discussed and still in the ascendant, even though multi-dimensional philosophical discourses emerged as an important trend in the 20th century. The present study first outlines the basic phenomenology of Edmund Husserl (1859-1938), introducing the concepts of intentionality, eidetic reduction, horizon, world, and inter-subjectivity. The phenomenology of music and of art in general is then brought into focus through Roman Ingarden’s The Work of Music and the Problems of its Identity (1933) and Mikel Dufrenne’s The Phenomenology of Aesthetic Experience (1953). Finally, Debussy’s music is analyzed and discussed from the perspective of phenomenology. Phenomenology is not so much a methodology or an analytic apparatus as a shared conviction: to describe, in as much detail as possible, the varieties of human experience in relation to its intended object. Such an idea had been practiced in various guises for centuries, but only in the early 20th century was phenomenology refined through the works of Husserl, Heidegger, Sartre, Merleau-Ponty, and others. Debussy was born in an age when Western society began to accept a multi-cultural baptism. With his unusual sensitivity to oriental culture, Debussy showed considerable inspiration, absorption, and echo of it in his works. Indeed, his relationship with nature comes close to echoing that between the ancient Chinese literati and nature. Although he was not the first composer to associate music with humanity and nature, the unique quality and impact of his works made him a significant figure in music aesthetics. Debussy’s music sought to develop a quality analogous to nature and, more importantly, to reach the realm of pure art through vivid life experience and artistic transformation.
The idea that life experience precedes the artwork, whether clear or vague, simple or complex, and was later presented abstractly in his late works, remains an interesting subject worth further discussion. Debussy’s music has now existed for roughly a century and has received as much attention from musicologists as other important works in the history of Western music. Among the pluralistic discussions of Debussy’s art and ideas, phenomenological aesthetics has opened new ideas and vantage points from which to re-examine his great works, and has even lent legitimacy to some earlier arguments. Overall, this article offers a new insight into Debussy’s music through phenomenological exploration, and it is believed that phenomenology will become an important pathway in research on music aesthetics.

Keywords: Debussy's music, music aesthetics, oriental culture, phenomenology

Procedia PDF Downloads 239
267 Simulation and Characterization of Stretching and Folding in Microchannel Electrokinetic Flows

Authors: Justo Rodriguez, Daming Chen, Amador M. Guzman

Abstract:

The detection, treatment, and control of rapidly propagating, deadly viruses such as COVID-19 require the development of inexpensive, fast, and accurate devices to address the urgent needs of the population. Among the various detection methods and techniques, microfluidics-based sensors are easy to use. A micro-analyzer is defined here as a microfluidics-based sensor composed of a network of microchannels with varying functions. Given their size, portability, and accuracy, such devices are proving more effective and convenient than other solutions. A micro-analyzer based on the “Lab on a Chip” concept offers advantages over non-micro devices owing to its smaller size and its better ratio of useful area to volume. The integration of multiple processes in a single microdevice reduces both the number of necessary samples and the analysis time, leading to the next generation of analyzers for the health sciences. In some applications, the flow within the microchannels is driven by a pressure gradient, which can adversely affect biological samples. A more efficient and less damaging way of controlling the flow in a microchannel-based analyzer is to apply an electric field to induce the fluid motion and either enhance or suppress the mixing process. Electrokinetic flows are characterized by at least two non-dimensional parameters: the electric Rayleigh number and the geometrical aspect ratio. In this research, stable and unstable flows have been studied numerically (and, where possible, will be studied experimentally) in a T-shaped microchannel. Additionally, unstable electrokinetic flows at Rayleigh numbers above the critical value have been characterized. The flow mixing enhancement was quantified in terms of the stretching and folding that fluid particles undergo when subjected to supercritical electrokinetic flows.
Computational simulations were carried out with a finite-element-based program, following the flow mixing concepts developed by Gollub and collaborators. Hundreds of seeded massless particles were tracked along the microchannel from entrance to exit for both stable and unstable flows. After post-processing, their trajectories and the folding and stretching values for the different flows were obtained. Numerical results show that for supercritical electrokinetic flows the folding and stretching processes become markedly stronger. Consequently, the mixing process improves, ultimately leading to a more homogeneous mixture.
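The abstract does not give the formula used to quantify stretching. As a hedged illustration of the general pair-tracking idea, the sketch below measures the separation growth of two nearby massless tracers in the alternating sine flow, a standard chaotic-mixing toy model chosen here for simplicity; it is an assumption, not the electrokinetic flow field simulated in the paper:

```python
import numpy as np

def sine_flow_step(p):
    # One period of the alternating sine flow: shear in x driven by y,
    # then shear in y driven by x, on the unit torus.
    x, y = p
    x = (x + np.sin(2 * np.pi * y)) % 1.0
    y = (y + np.sin(2 * np.pi * x)) % 1.0
    return np.array([x, y])

def stretch_factor(p0, delta=1e-7, steps=8):
    """Advect two initially close massless tracers and return the growth of
    their separation, a crude measure of the stretching fluid elements undergo."""
    a = np.asarray(p0, float)
    b = a + np.array([delta, 0.0])
    for _ in range(steps):
        a, b = sine_flow_step(a), sine_flow_step(b)
    sep = np.abs(b - a)
    sep = np.minimum(sep, 1.0 - sep)  # shortest distance on the periodic domain
    return float(np.linalg.norm(sep) / delta)
```

A stretch factor well above 1 after a few periods indicates exponential stretching of material lines, the signature of efficient mixing.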

Keywords: microchannel, stretching and folding, electrokinetic flow mixing, micro-analyzer

Procedia PDF Downloads 102
266 Bidirectional Pendulum Vibration Absorbers with Homogeneous Variable Tangential Friction: Modelling and Design

Authors: Emiliano Matta

Abstract:

Passive resonant vibration absorbers are among the most widely used dynamic control systems in civil engineering. They typically consist of a single-degree-of-freedom mechanical appendage of the main structure, tuned to one structural target mode through frequency and damping optimization. One classical scheme is the pendulum absorber, whose mass is constrained to move along a curved trajectory and is damped by viscous dashpots. Even though the principle is well known, the search for improved arrangements is still under way. In recent years this search has inspired a type of bidirectional pendulum absorber (BPA), consisting of a mass constrained to move along an optimal three-dimensional (3D) concave surface. For such a BPA, the principal curvatures of the surface are designed to tune the absorber to both principal modes of the main structure, while damping is produced either by horizontal viscous dashpots or by vertical friction dashpots connecting the BPA to the main structure. In this paper, a variant of the BPA is proposed in which damping originates from the variable tangential friction force that develops between the pendulum mass and the 3D surface as a result of a spatially varying friction coefficient pattern. Specifically, a friction coefficient is proposed that varies along the pendulum surface in proportion to the modulus of the 3D surface gradient. Under this assumption, the dissipative model of the absorber can be proven to be nonlinear homogeneous in the small-displacement domain. The resulting homogeneous BPA (HBPA) has a fundamental advantage over conventional friction-type absorbers: its equivalent damping ratio is independent of the oscillation amplitude, and therefore its optimal performance does not depend on the excitation level. On the other hand, the HBPA is more compact than viscously damped BPAs because it requires no dampers.
This paper presents the analytical model of the HBPA and a methodology for its optimal design. Numerical simulations of single- and multi-story building structures under wind and earthquake loads are presented to compare the HBPA with classical viscously damped BPAs. It is shown that the HBPA is a promising alternative to existing BPA types and that homogeneous tangential friction is an effective means of realizing absorbers with amplitude-independent damping.
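As a hedged numerical sketch of the proposed friction law, assume an illustrative paraboloid surface z = x^2/(2 Rx) + y^2/(2 Ry) and made-up constants (the paper's actual surface and coefficients are not given in the abstract); the point of the sketch is the homogeneity property that makes the equivalent damping amplitude-independent:

```python
import numpy as np

def surface_gradient(x, y, Rx, Ry):
    # Illustrative paraboloid z = x^2/(2 Rx) + y^2/(2 Ry): gradient (dz/dx, dz/dy).
    return np.array([x / Rx, y / Ry])

def friction_coefficient(x, y, Rx, Ry, c=0.05):
    # Proposed spatial pattern: mu proportional to the modulus of the surface gradient.
    return c * np.linalg.norm(surface_gradient(x, y, Rx, Ry))

def tangential_friction_force(x, y, Rx, Ry, m=1.0, g=9.81, c=0.05):
    # Small-displacement approximation: the normal force is close to m*g, so the
    # friction force magnitude mu(x, y) * m * g grows linearly with displacement,
    # i.e. the dissipative model is homogeneous of degree one.
    return friction_coefficient(x, y, Rx, Ry, c) * m * g
```

Doubling the displacement doubles the friction force, so the energy dissipated per cycle scales with the square of the amplitude, just as for a linear viscous damper; this is why the equivalent damping ratio does not depend on the excitation level.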

Keywords: amplitude-independent damping, homogeneous friction, pendulum nonlinear dynamics, structural control, vibration resonant absorbers

Procedia PDF Downloads 120
265 Web-Based Instructional Program to Improve Professional Development: Recommendations and Standards for Radioactive Facilities in Brazil

Authors: Denise Levy, Gian M. A. A. Sordi

Abstract:

This web-based project focuses on continuing corporate education and improving workers' skills in Brazilian radioactive facilities throughout the country. Information and Communication Technologies (ICTs) can improve communication across this very large country, where ensuring high-quality professional information for as many people as possible is a serious challenge. The main objective of the system is to provide Brazilian radioactive facilities with a complete web-based repository, in Portuguese, for research, consultation, and information, offering conditions for learning and for improving professional and personal skills. UNIPRORAD is a web-based system offering unified programs and interrelated information about radiological protection. Its content includes best practices for radioactive facilities that meet both national standards and the international recommendations published over the past decades by the International Commission on Radiological Protection (ICRP), the International Atomic Energy Agency (IAEA), and the National Nuclear Energy Commission (CNEN). The website covers concepts, definitions, and theory concerning optimization and ionizing radiation monitoring procedures. Moreover, it discusses certain national and international recommendations in depth, such as potential exposure, currently one of the most important research fields in radiological protection. Only two ICRP publications address the issue at length, and knowledge of failure probabilities is still lacking: there remain uncertainties in finding effective ways to quantify probabilistically the occurrence of potential exposures and the probability of reaching a given dose level. To respond to this challenge, the project discusses and introduces potential exposures more quantitatively than current national and international recommendations do.
Articulating valid ICRP and IAEA recommendations and official reports, together with scientific papers published in major international congresses, the website discusses and suggests a number of effective safety actions that can be incorporated into labor practice. The web platform was created according to corporate and public needs, with a robust but flexible design that can easily be adapted to future demands. ICTs provide a vast array of new communication capabilities and allow information to reach as many people as possible at low cost and with high communication quality. This initiative provides opportunities for employees to increase their professional skills, stimulating development in a large country where ensuring effective and up-to-date information for geographically distant facilities is an enormous challenge, minimizing costs and optimizing results.

Keywords: distance learning, information and communication technology, nuclear science, radioactive facilities

Procedia PDF Downloads 173
264 Anti-Gravity to Neo-Concretism: The Epodic Spaces of Non-Objective Art

Authors: Alexandra Kennedy

Abstract:

Making use of the notion of ‘epodic spaces’, this paper presents a reconsideration of non-objective art practices, proposing alternatives to established materialist, formalist, and process-based conceptualist approaches to such work. In his Neo-Concrete Manifesto (1959), Ferreira Gullar (1930-2016) sought to distinguish between various forms of non-objective art. He set the ‘geometric’ arts of neoplasticism, constructivism, and suprematism, which he described as ‘dangerously acute rationalism’, apart from other non-objective practices. These alternatives, he proposed, possess an expressive potential lacking in the former, and this formed the basis for their categorisation as neo-concrete. Gullar prioritized the phenomenological over the rational, with an emphasis on the role of the spectator (a key concept of minimalism), and highlighted the central role of sensual experience, colour, and the poetic in such work. In the early twentieth century, Russian Cosmism, an esoteric philosophical movement, was highly influential on the Russian avant-garde and can account for the suprematist artists’ interest in, and approach to, planar geometry and four-dimensional space, as demonstrated in the abstract paintings of Kasimir Malevich (1879-1935). Nikolai Fyodorov (1823-1903) promoted the idea of anti-gravity and of cosmic space as the field for artistic activity. The artist and writer Kuzma Petrov-Vodkin (1878-1939) wrote on the concept of Euclidean space, on overcoming such rational conceptions of space, and on breaking free from the gravitational field and the earth’s sphere. These imaginary spaces, which also invoke a bodily experience, lend a poetic dimension to the work of the suprematists, a dimension that arguably aligns more closely with Gullar’s formulation of the neo-concrete than with his alignment of Suprematism with rationalism.
In its experiments with planar geometry, in its interest in forms suggestive of an experience of breaking free, both physically from the earth and conceptually from rational, mathematical space (a preoccupation with non-Euclidean space and anti-geometry), and in its engagement with the spatial properties of colour, Suprematism presents itself as imaginatively epodic. The paper discusses both historical and contemporary non-objective practices in this context, drawing attention to the manner in which the category of the non-objective is used to classify artworks that are, arguably, qualitatively different.

Keywords: anti-gravity, neo-concrete, non-Euclidean geometry, non-objective painting

Procedia PDF Downloads 150
263 A New Method Separating Relevant Features from Irrelevant Ones Using Fuzzy and OWA Operator Techniques

Authors: Imed Feki, Faouzi Msahli

Abstract:

Selecting the relevant parameters from a high-dimensional process operation space is a problem frequently encountered in industrial process modelling. This paper presents a method for selecting the fabric physical parameters most relevant to each sensory quality feature. The proposed relevancy criterion was developed through two approaches. The first uses a fuzzy sensitivity criterion, exploiting from experimental data the relationship between the physical parameters and all the sensory quality features for each evaluator; an OWA aggregation procedure is then applied to aggregate the ranking lists provided by the different evaluators. In the second approach, another panel of experts provides ranking lists of physical features according to their professional knowledge. Again applying OWA and a fuzzy aggregation model, the data-sensitivity-based ranking list and the knowledge-based ranking list are combined, using the proposed percolation technique, to determine the final ranking list. The key idea of the percolation technique is to filter the relevant features automatically and objectively by creating a gap between the scores of relevant and irrelevant parameters. It generates the threshold automatically, effectively reducing the human subjectivity and arbitrariness involved in choosing thresholds manually. For a given sensory descriptor, the threshold is defined systematically by iteratively aggregating (n times) the ranking lists generated by the OWA and fuzzy models, according to a specific algorithm. Applying the percolation technique to a real example, a well-known finished textile product, stonewashed denim, whose sensory quality is usually considered the most important criterion in jeans evaluation, we separate the relevant physical features from the irrelevant ones for each sensory descriptor.
The originality and performance of the proposed relevant-feature selection method are shown by the variability in the number of physical features in the selected relevant sets. Instead of selecting the same number of features with a predefined threshold, the method adapts to the specific nature of the complex relations between sensory descriptors and physical features, proposing relevant-feature lists of different sizes for different descriptors. To obtain more reliable selections, the percolation technique is applied to combine the fuzzy global relevancy and OWA global relevancy criteria, clearly separating the scores of relevant physical features from those of irrelevant ones.
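For concreteness, here is a small sketch of a plain OWA aggregation step as it might combine several evaluators' relevancy scores per physical feature; the weight vector and scores are illustrative assumptions, and the paper's fuzzy models and percolation algorithm are not reproduced:

```python
import numpy as np

def owa(scores, weights):
    """Ordered Weighted Averaging: sort the inputs in descending order, then
    take the dot product with a fixed weight vector summing to one."""
    s = np.sort(np.asarray(scores, float))[::-1]
    w = np.asarray(weights, float)
    assert np.isclose(w.sum(), 1.0), "OWA weights must sum to 1"
    return float(s @ w)

def aggregate_feature_scores(score_matrix, weights):
    # Rows: evaluators; columns: physical features. Each feature's scores
    # across evaluators are fused into one aggregated relevancy score.
    return [owa(col, weights) for col in np.asarray(score_matrix, float).T]
```

With the weight mass concentrated on the first positions the operator behaves like a max (optimistic fusion); uniform weights recover the arithmetic mean.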

Keywords: data sensitivity, feature selection, fuzzy logic, OWA operators, percolation technique

Procedia PDF Downloads 576
262 Festival Gamification: Conceptualization and Scale Development

Authors: Liu Chyong-Ru, Wang Yao-Chin, Huang Wen-Shiung, Tang Wan-Ching

Abstract:

Although gamification has attracted attention and been applied in the tourism industry, little literature on it can be found in tourism academia. To contribute knowledge on festival gamification, it is therefore essential to begin by establishing a Festival Gamification Scale (FGS). This study defines festival gamification as the extent to which a festival involves game elements and game mechanisms. Based on self-determination theory, this study developed an FGS through a multi-study method. In study one, five FGS dimensions were identified through a literature review, followed by twelve in-depth interviews; a total of 296 statements were extracted from the interviews and later narrowed down to 33 items under six dimensions. In study two, 226 survey responses were collected from a cycling festival for exploratory factor analysis, resulting in twenty items under five dimensions. In study three, 253 survey responses were obtained from a marathon festival for confirmatory factor analysis, resulting in the final sixteen items under five dimensions; results of criterion-related validity then confirmed the positive effects of these five dimensions on flow experience. In study four, to examine the model extension of the developed five-dimensional, 16-item FGS, whose dimensions are relatedness, mastery, competence, fun, and narratives, a cross-validation analysis was performed using 219 survey responses from a religious festival. For the tourism academy, the FGS could further be applied in other sub-fields such as destinations, theme parks, cruise trips, or resorts. The FGS serves as a starting point for examining how festival gamification changes tourists’ attitudes and behaviors. Future studies could follow up on the FGS by testing outcomes of festival gamification or examining moderating effects that enhance those outcomes.
On the other hand, although the FGS has been tested at cycling, marathon, and religious festivals, all the research settings are in Taiwan; cultural differences in the FGS are a further direction for contributing knowledge on festival gamification. This study also offers several valuable practical implications. First, the FGS could be used in tourist surveys to evaluate the extent of a festival's gamification. Based on the FGS performance assessment, festival management organizations and festival planners could learn the relative scores among the FGS dimensions and plan future improvements in gamifying the festival. Second, the FGS could be applied to positioning a gamified festival: festival management organizations and planners could first consider the features and type of their festival, and then gamify it by investing resources in the key FGS dimensions.

Keywords: festival gamification, festival tourism, scale development, self-determination theory

Procedia PDF Downloads 129
261 Nanoimprinted-Block Copolymer-Based Porous Nanocone Substrate for SERS Enhancement

Authors: Yunha Ryu, Kyoungsik Kim

Abstract:

Raman spectroscopy is one of the most powerful techniques for chemical detection, but its low sensitivity, originating from the extremely small cross-section of Raman scattering, limits its practical use. To overcome this problem, Surface-Enhanced Raman Scattering (SERS) has been studied intensively for several decades. Because the SERS effect arises mainly from the strong electromagnetic near-field enhancement produced by the localized surface plasmon resonance of metallic nanostructures, it is important to design plasmonic structures with a high density of electromagnetic hot spots for SERS substrates. One useful fabrication route is to use a porous nanomaterial as a template for the metallic structure. Internal pores on a scale of tens of nanometers can act as strong electromagnetic hot spots by confining the incident light, and porous structures capture more target molecules than non-porous structures within the same detection spot thanks to their large surface area. Herein we report a facile method of fabricating a porous SERS substrate by combining solvent-assisted nanoimprint lithography with selective etching of a block copolymer. We obtained nanostructures with high porosity via simple selective etching of one microdomain of the diblock copolymer. Furthermore, we imprinted nanocone patterns into the spin-coated flat block copolymer film to create a three-dimensional SERS substrate with a high density of hot spots as well as a large surface area. Solvent-assisted nanoimprint lithography (SAIL) reduces the fabrication time and cost of patterning the BCP film by using a solvent that dissolves both the polystyrene and the poly(methyl methacrylate) domains of the block copolymer; the film could thus be molded at low temperature and atmospheric pressure in a short time. After Ag deposition, we measured the Raman intensity of dye molecules adsorbed on the fabricated structure.
Compared with the Raman signal of an Ag-coated solid nanocone, the porous nanocone showed a 10-times-higher Raman intensity at the 1510 cm⁻¹ band. In conclusion, we fabricated porous metallic nanocone arrays with a high density of electromagnetic hot spots by templating a nanoimprinted diblock copolymer with selective etching, and demonstrated their capability as an effective SERS substrate.
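For context, intensity comparisons like the one above are conventionally normalized into an analytical enhancement factor. The sketch below implements the standard EF formula with purely illustrative numbers; the abstract reports only the tenfold intensity ratio, not molecule counts:

```python
def sers_enhancement_factor(i_sers, i_ref, n_sers, n_ref):
    """Conventional analytical SERS enhancement factor:
       EF = (I_SERS / N_SERS) / (I_ref / N_ref),
    where I is the measured band intensity and N the number of probed molecules."""
    return (i_sers / n_sers) / (i_ref / n_ref)
```

If equal numbers of molecules are probed on both substrates, the tenfold intensity ratio translates directly into a tenfold relative enhancement of the porous over the solid nanocones.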

Keywords: block copolymer, porous nanostructure, solvent-assisted nanoimprint, surface-enhanced Raman spectroscopy

Procedia PDF Downloads 598