Search results for: contention resolution
1142 Assessment of Environmental Quality of an Urban Setting
Authors: Namrata Khatri
Abstract:
The rapid growth of cities is transforming the urban environment and posing significant challenges for environmental quality. This study examines the urban environment of Belagavi in Karnataka, India, using geostatistical methods to assess the spatial pattern and land use distribution of the city and to evaluate the quality of the urban environment. The study is driven by the necessity to assess the environmental impact of urbanisation. Satellite data were utilised to derive information on land use and land cover. The investigation revealed that land use had changed significantly over time, with a decline in vegetation cover and an increase in built-up areas. High-resolution satellite data were also utilised to map the city's open areas and gardens. GIS-based analysis was used to assess public green space accessibility and to identify regions with inadequate waste management practices. The findings revealed that garbage collection and disposal techniques in specific areas of the city needed to be improved. Moreover, the study evaluated the city's thermal environment using Landsat 8 land surface temperature (LST) data. The investigation found that built-up regions had higher LST values than green areas, pointing to the city's urban heat island (UHI) effect. The study's conclusions have far-reaching ramifications for urban planners and policymakers in Belagavi and other similar cities. The findings may be utilised to create sustainable urban planning strategies that address the environmental effects of urbanisation while also improving the quality of life for city dwellers. Satellite data and high-resolution satellite images were gathered for the study, and remote sensing and GIS tools were utilised to process and analyse the data. Ground truthing surveys were also carried out to confirm the accuracy of the remote sensing and GIS-based data. Overall, this study provides a comprehensive assessment of Belagavi's environmental quality and emphasises the potential of remote sensing and geographic information systems (GIS) approaches in environmental assessment and management.
Keywords: environmental quality, UEQ, remote sensing, GIS
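A minimal sketch of the UHI comparison described above, assuming NDVI and LST rasters are already in hand as NumPy arrays; the synthetic data and the 0.3 NDVI cutoff are illustrative assumptions, not the study's values:

```python
import numpy as np

# Sketch: classify pixels as built-up or green from an NDVI threshold, then
# compare mean land surface temperature (LST) between the two classes. The
# rasters below are synthetic stand-ins for Landsat 8 derived products, and
# the 0.3 cutoff is an assumed threshold.
rng = np.random.default_rng(0)
ndvi = rng.uniform(-0.1, 0.8, size=(100, 100))
lst = 30.0 + 8.0 * (0.5 - ndvi) + rng.normal(0.0, 1.0, size=(100, 100))

green = ndvi >= 0.3
built_up = ~green

print(f"mean LST, built-up: {lst[built_up].mean():.1f} degC")
print(f"mean LST, green:    {lst[green].mean():.1f} degC")
print(f"UHI contrast:       {lst[built_up].mean() - lst[green].mean():.1f} degC")
```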
Procedia PDF Downloads 80
1141 Memory Retrieval and Implicit Prosody during Reading: Anaphora Resolution by L1 and L2 Speakers of English
Authors: Duong Thuy Nguyen, Giulia Bencini
Abstract:
The present study examined structural and prosodic factors in the computation of antecedent-reflexive relationships and sentence comprehension in native English speakers (L1) and Vietnamese-English bilinguals (L2). Participants read sentences presented on the computer screen in one of three presentation formats aimed at manipulating prosodic parsing: word-by-word (RSVP), phrase-segment (self-paced), or whole-sentence (self-paced), then completed a grammaticality rating and a comprehension task (following Pratt & Fernandez, 2016). The design crossed three factors: syntactic structure (simple; complex), grammaticality (target-match; target-mismatch), and presentation format. An example item is provided in (1): (1) The actress that (Mary/John) interviewed at the awards ceremony (about two years ago/organized outside the theater) described (herself/himself) as an extreme workaholic. Results showed that overall, both L1 and L2 speakers made use of a good-enough processing strategy at the expense of more detailed syntactic analyses. L1 and L2 speakers' comprehension and grammaticality judgements were negatively affected by the most prosodically disruptive condition (word-by-word). However, the two groups demonstrated differences in their performance in the other two reading conditions. For L1 speakers, the whole-sentence and the phrase-segment formats were both facilitative in the grammaticality rating and comprehension tasks; for L2 speakers, compared with the whole-sentence condition, the phrase-segment paradigm did not significantly improve accuracy or comprehension. These findings are consistent with those of Pratt & Fernandez (2016), who found a similar pattern of results in the processing of subject-verb agreement relations using the same experimental paradigm and prosodic manipulation with L1 English speakers and L2 English-Spanish speakers. The results provide further support for a Good-Enough cue model of sentence processing that integrates cue-based retrieval and implicit prosodic parsing (Pratt & Fernandez, 2016) and highlight similarities and differences between L1 and L2 sentence processing and comprehension.
Keywords: anaphora resolution, bilingualism, implicit prosody, sentence processing
Procedia PDF Downloads 152
1140 A Kierkegaardian Reading of Iqbal's Poetry as a Communicative Act
Authors: Sevcan Ozturk
Abstract:
The overall aim of this paper is to present a Kierkegaardian approach to Iqbal's use of literature as a form of communication. Despite belonging to different historical, cultural, and religious backgrounds, the philosophical approaches of Soren Kierkegaard, 'the father of existentialism,' and Muhammad Iqbal, 'the spiritual father of Pakistan,' present certain parallels. Both Kierkegaard and Iqbal take human existence as the starting point for their reflections, emphasise the subject of becoming genuine religious personalities, and develop a notion of the self. While doing so, they both adopt parallel methods, employ literary techniques and poetical forms, and use their literary works as a form of communication. The problem is that Iqbal does not provide a clear account of his method as Kierkegaard does in his works. As a result, Iqbal's literary approach appears to be a collection of contradictions. This is mainly because, despite writing most of his works in poetical form, he condemns all kinds of art, including poetry. Moreover, while attacking Islamic mysticism, he at the same time uses classical literary forms and a number of traditional mystical, poetic symbols. This paper will argue that the contradictions found in Iqbal's approach are actually a significant part of Iqbal's way of communicating with his reader. It is the contention of this paper that, with the help of the parallels between the literary and philosophical theories of Kierkegaard and Iqbal, the application of Kierkegaard's method to Iqbal's use of poetry as a communicative act will make it possible to dispel the seeming ambiguities in Iqbal's literary approach. The application of Kierkegaard's theory to Iqbal's literary method will include, first, an analysis of the main principles of Kierkegaard's own literary technique of 'indirect communication,' which is a crucial term of his existentialist philosophy. Second, the clash between what Iqbal says about art and poetry and what he does will be highlighted in the light of the Kierkegaardian theory of indirect communication. It will be argued that Iqbal's literary technique can be considered a form of 'indirect communication,' and that reading his technique in this way helps dispel the contradictions in his approach. It is hoped that this paper will cultivate a dialogue between those who work in the fields of comparative philosophy, Kierkegaard studies, existentialism, contemporary Islamic thought, Iqbal studies, and literary criticism.
Keywords: comparative philosophy, existentialism, indirect communication, intercultural philosophy, literary communication, Muhammad Iqbal, Soren Kierkegaard
Procedia PDF Downloads 333
1139 Detection of Temporal Change of Fishery and Island Activities by DNB and SAR on the South China Sea
Authors: I. Asanuma, T. Yamaguchi, J. Park, K. J. Mackin
Abstract:
Fishery lights on the sea surface can be detected by the Day and Night Band (DNB) of the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (Suomi-NPP). The DNB covers the spectral range of 500 to 900 nm and achieves a high sensitivity. The DNB has difficulty distinguishing fishing lights from lunar light reflected by clouds, which affects observations for half of the month. Fishery lights and other surface lights are separated from lunar light reflected by clouds by a method using the DNB and the infrared band, where the detection limits are defined as a function of the brightness temperature, with a difference from the maximum temperature for each level of DNB radiance and with the contrast of DNB radiance against the background radiance. Fishery boats or structures on islands can be detected by the Synthetic Aperture Radar (SAR) on polar orbit satellites using the microwave reflected by surface targets. SAR faces a tradeoff between spatial resolution and coverage when detecting small targets like fishery boats. The distribution of fishery boats and island activities was detected by the scan-SAR narrow mode of Radarsat-2, which covers 300 km by 300 km with various combinations of polarizations. The fishing boats were detected as single pixels of highly scattering targets with the scan-SAR narrow mode, whose spatial resolution is 30 m. As the look-angle-dependent scattering signals exhibit significant differences, the standard deviations of the scattered signals for each look angle were taken into account as a threshold to distinguish the signals of fishing boats and structures on the islands from background noise. It was difficult to validate the targets detected by DNB against SAR data because of the time lag of about 6 hours between observations at midnight by DNB and in the morning or evening by SAR. The temporal changes of island activities were detected as a change of mean DNB intensity over a circular area for a certain scale of activities. An increase of mean DNB intensity corresponded to the beginning of dredging, and the change of intensity indicated the end of reclamation and the subsequent construction of facilities.
Keywords: day night band, SAR, fishery, South China Sea
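As an illustration of the contrast-against-background idea in the detection scheme above, the following sketch flags DNB pixels that exceed the local background radiance by several standard deviations; it is a simplified analogue, not the authors' exact brightness-temperature-dependent detection-limit function, and all values are synthetic:

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Sketch: flag DNB pixels whose radiance exceeds the local background mean by
# k local standard deviations, computed over a small moving window.
def detect_lights(dnb, k=5.0, box=7):
    """Return a boolean mask of candidate fishing-light pixels."""
    mean = uniform_filter(dnb, box)
    sq_mean = uniform_filter(dnb * dnb, box)
    std = np.sqrt(np.maximum(sq_mean - mean * mean, 1e-12))
    return dnb > mean + k * std

rng = np.random.default_rng(1)
scene = rng.normal(1.0, 0.05, size=(200, 200))  # hypothetical background radiance
scene[50, 60] += 3.0                            # one bright point source (a boat light)
print(detect_lights(scene).sum(), "candidate pixel(s) detected")
```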
Procedia PDF Downloads 235
1138 3D Label-Free Bioimaging of Native Tissue with Selective Plane Illumination Optical Microscopy
Authors: Jing Zhang, Yvonne Reinwald, Nick Poulson, Alicia El Haj, Chung See, Mike Somekh, Melissa Mather
Abstract:
Biomedical imaging of native tissue using light offers the potential to obtain excellent structural and functional information in a non-invasive manner with good temporal resolution. Image contrast can be derived from intrinsic absorption, fluorescence, or scatter, or through the use of extrinsic contrast. A major challenge in applying optical microscopy to in vivo tissue imaging is the effect of light attenuation, which limits light penetration depth and achievable imaging resolution. Recently, Selective Plane Illumination Microscopy (SPIM) has been used to map the 3D distribution of fluorophores dispersed in biological structures. In this approach, a focused sheet of light is used to illuminate the sample from the side to excite fluorophores within the sample of interest. Images are formed based on detection of fluorescence emission orthogonal to the illumination axis. By scanning the sample along the detection axis and acquiring a stack of images, 3D volumes can be obtained. The combination of rapid image acquisition, low photon dose to samples, and the optical sectioning that SPIM provides makes it an attractive approach for imaging biological samples in 3D. To date, all implementations of SPIM rely on the use of fluorescence reporters, be they endogenous or exogenous. This approach has the disadvantage that, in the case of exogenous probes, the specimens are altered from their native state, rendering them unsuitable for in vivo studies, and in general fluorescence emission is weak and transient. Here we present, for the first time to our knowledge, a label-free implementation of SPIM that has downstream applications in the clinical setting. The experimental setup used in this work incorporates both label-free and fluorescent illumination arms, in addition to a high-specification camera that can be partitioned for simultaneous imaging of both fluorescent emission and scattered light from intrinsic sources of optical contrast in the sample being studied. This work first involved calibration of the imaging system and validation of the label-free method with well-characterised fluorescent microbeads embedded in agarose gel. 3D constructs of mammalian cells cultured in agarose gel with varying cell concentrations were then imaged. A time-course study to track cell proliferation in the 3D construct was also carried out, and finally a native tissue sample was imaged. For each sample, multiple images were obtained by scanning the sample along the axis of detection, and 3D maps were reconstructed. The results obtained validated label-free SPIM as a viable approach for imaging cells in a 3D gel construct and in native tissue. This technique has potential use in a near-patient environment: it can provide results quickly and can be implemented in an easy-to-use manner to provide more information with improved spatial resolution and depth penetration than current approaches.
Keywords: bioimaging, optics, selective plane illumination microscopy, tissue imaging
Procedia PDF Downloads 247
1137 Cross-Cultural Conflict Management in Transnational Business Relationships: A Qualitative Study with Top Executives in Chinese, German and Middle Eastern Cases
Authors: Sandra Hartl, Meena Chavan
Abstract:
This paper presents the outcome of four years of Ph.D. research on cross-cultural conflict management in transnational business relationships. An important and complex problem, managing conflicts that arise across cultures in business relationships, is investigated, and conflict resolution strategies are identified. This paper particularly focuses on transnational relationships within a Chinese, German, and Middle Eastern framework. Unlike many papers on this issue, which have been built on experiments with international MBA students, this research provides real-life cases of cross-cultural conflicts, which are not easy to capture. Its uniqueness is underpinned by the fact that the real case data was gathered by interviewing top executives in management positions in large multinational corporations through a qualitative case study approach. This paper makes a valuable contribution to the theory of cross-cultural conflicts and, despite the sensitivity of the subject, presents real-time business data about breaches of contracts between counterparties engaged in transnational operating organizations. The overarching aim of this research is to identify the degree of significance of the cultural factors and the communication factors embedded in cross-cultural business conflicts. It questions, from a cultural perspective, what factors lead to the conflicts in each of the cases, what the causes are, and the role of culture in identifying effective strategies for resolving international disputes in an increasingly globalized business world. The results of 20 face-to-face interviews are outlined, which were conducted, recorded, transcribed, and then analyzed using the NVIVO qualitative data analysis system. The outcomes make evident that the factors leading to conflicts are broadly organized under seven themes: communication, cultural difference, environmental issues, work structures, knowledge and skills, cultural anxiety, and personal characteristics. When evaluating the causes of the conflicts, it is notable that these are rather multidimensional. Irrespective of the conflict type (relationship or task-based conflict, or conflict due to individual personal differences), relationships are almost always an element of all conflicts. Cultural differences, which are a critical factor in conflicts, result from different cultures placing different levels of importance on relationships. Communication issues, which are another cause of conflict, also reflect the different relationship styles favored by different cultures. In identifying effective strategies for solving cross-cultural business conflicts, this research finds that solutions need to consider the national cultures (country-specific characteristics), organizational cultures, and individual culture of the persons engaged in the conflict, and how these are interlinked with each other. The outcomes identify practical dispute resolution strategies for resolving cross-cultural business conflicts with reference to communication, empathy, and training to improve cultural understanding and cultural competence, through the use of mediation. To conclude, the findings of this research will not only add value to academic knowledge of cross-cultural conflict management across transnational businesses but will also add value to numerous cross-border business relationships worldwide. Above all, it identifies the influence of culture, communication, and cross-cultural competence in reducing cross-cultural business conflicts in transnational business.
Keywords: business conflict, conflict management, cross-cultural communication, dispute resolution
Procedia PDF Downloads 162
1136 Multi-Scale Geographic Object-Based Image Analysis (GEOBIA) Approach to Segment Very High Resolution Images for the Extraction of New Degraded Zones: Application to the Region of Mécheria in the South-West of Algeria
Authors: Bensaid A., Mostephaoui T., Nedjai R.
Abstract:
A considerable area of Algerian land is threatened by the phenomenon of wind erosion. For a long time, wind erosion and its associated harmful effects on the natural environment have posed a serious threat, especially in the arid regions of the country. In recent years, as a result of increases in the irrational exploitation of natural resources (fodder) and extensive land clearing, wind erosion has become particularly accentuated. The extent of degradation in the arid region of the Algerian Mécheria department has generated a new situation characterized by the reduction of vegetation cover and of land productivity, as well as sand encroachment on urban development zones. In this study, we attempt to investigate the potential of remote sensing and geographic information systems for detecting the spatial dynamics of the ancient dune cords, based on the numerical processing of PlanetScope PSB.SB sensor images of September 29, 2021. As a second step, we explore the use of a multi-scale geographic object-based image analysis (GEOBIA) approach to segment the high spatial resolution images acquired over heterogeneous surfaces that vary according to human influence on the environment. We used the fractal net evolution approach (FNEA) algorithm to segment the images (Baatz & Schäpe, 2000). Multispectral data, a digital terrain model layer, ground truth data, a normalized difference vegetation index (NDVI) layer, and a first-order texture (entropy) layer were used to segment the multispectral images at three segmentation scales, with an emphasis on accurately delineating the boundaries and components of the sand accumulation areas (dunes, dune fields, nebkas, and barchans). It is important to note that each auxiliary dataset contributed to improving the segmentation at different scales. The silted areas were classified using a nearest neighbor approach over the Naâma area. The classification of silted areas was successfully achieved over all study areas with an accuracy greater than 85%, although the results suggest that, overall, a higher degree of landscape heterogeneity may have a negative effect on segmentation and classification. Some areas suffered from the greatest over-segmentation and lowest mapping accuracy (Kappa: 0.79), which was partially attributed to confounding a greater proportion of mixed siltation classes from both sandy areas and bare ground patches. This research has demonstrated a technique based on very high-resolution images for mapping sanded and degraded areas using GEOBIA, which can be applied to the study of other lands in the steppe areas of the northern countries of the African continent.
Keywords: land development, GIS, sand dunes, segmentation, remote sensing
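Where the mapping accuracy above is summarised with a kappa statistic, the computation is the standard one; a minimal sketch from an illustrative confusion matrix (not the study's data):

```python
import numpy as np

# Cohen's kappa from a confusion matrix (rows: reference classes, columns:
# mapped classes): agreement observed beyond chance agreement.
def cohens_kappa(cm):
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    p_observed = np.trace(cm) / n
    p_expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
    return (p_observed - p_expected) / (1.0 - p_expected)

confusion = [[120, 10, 5],
             [8, 95, 12],
             [4, 9, 87]]
print(f"kappa = {cohens_kappa(confusion):.2f}")
```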
Procedia PDF Downloads 109
1135 Irish Film Tourism, Neocolonialism and Star Wars: Charting a Course Towards Ecologically and Culturally Considered Representation and Tourism on Skellig Michael
Authors: Rachel Gough
Abstract:
In 2014, Skellig Michael, an island off Ireland's western seaboard and a UNESCO World Heritage Site, became a major setting in Disney's Star Wars franchise. The subsequent influx of tourists to the site has proven to be a point of contention nationally. The increased visitor numbers have uplifted certain areas of the local economy on the mainland but have caused irreparable damage to historic monuments and to endangered bird populations that breed on the island. Recent research carried out by a state body suggests far-reaching and long-term negative impacts on the island's culture and environment should the association with the Star Wars franchise persist. In spite of this, the film has been widely endorsed by the Irish government as providing a vital economic boost to historically marginalised rural areas through film tourism. This paper argues quite plainly that what is taking place on Skellig is neocolonialism. Skellig Michael's unique resources, its aesthetic qualities, its ecosystem, and its cultural currency have been sold by the state to a multinational corporation, which profits from their use. Meanwhile, locals are left to do their best to turn a market trend into sustainable business at the expense of culture, ecology, and community. This paper intends to be the first dedicated study of the psychogeographic and cultural impact of Skellig Michael's deterioration as a result of film tourism. It will discuss the projected impact of this incident on Irish culture more broadly and, finally, will attempt to lay out a roadmap for a more collaborative filmmaking and touristic approach, which allows local cultures and ecosystems to thrive without drastically inhibiting cultural production. This paper will ultimately find that the consequences of this representation require reading tourism as a split concept, namely into what we might loosely call "eco-tourism" and more capital-based "profit-bottom-line tourism."
Keywords: ecology, film tourism, neocolonialism, sustainability
Procedia PDF Downloads 206
1134 Derivation of Bathymetry from High-Resolution Satellite Images: Comparison of Empirical Methods through Geographical Error Analysis
Authors: Anusha P. Wijesundara, Dulap I. Rathnayake, Nihal D. Perera
Abstract:
Bathymetric information is of fundamental importance to coastal and marine planning and management, nautical navigation, and scientific studies of marine environments. Satellite-derived bathymetry data provide detailed information in areas where conventional sounding data are lacking and conventional surveys are inaccessible. The two empirical approaches of the log-linear bathymetric inversion model and the non-linear bathymetric inversion model are applied for deriving bathymetry from high-resolution multispectral satellite imagery. This study compares these two approaches by means of geographical error analysis for the site Kankesanturai using WorldView-2 satellite imagery. The parameters of the non-linear inversion model were calibrated by the Levenberg-Marquardt method, and multiple linear regression was applied to calibrate the log-linear inversion model. In order to calibrate both models, Single Beam Echo Sounding (SBES) data in the study area were used as reference points. Residuals were calculated as the difference between the derived depth values and the validation echo sounder bathymetry data, and the geographical distribution of model residuals was mapped. The spatial autocorrelation was calculated by comparing the performance of the bathymetric models, with the results showing the geographic errors for both models. A spatial error model was constructed from the initial bathymetry estimates and the estimates of autocorrelation. This spatial error model is used to generate more reliable estimates of bathymetry by quantifying the autocorrelation of model error and incorporating this into an improved regression model. The log-linear model (R²=0.846) performs better than the non-linear model (R²=0.692). Finally, the spatial error models improved the bathymetric estimates derived from the linear and non-linear models up to R²=0.854 and R²=0.704, respectively. The Root Mean Square Error (RMSE) was calculated for all reference points in various depth ranges. The magnitude of the prediction error increases with depth for both the log-linear and the non-linear inversion models. Overall RMSE values for the log-linear and the non-linear inversion models were ±1.532 m and ±2.089 m, respectively.
Keywords: log-linear model, multi spectral, residuals, spatial error model
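A minimal sketch of the log-linear calibration step described above, in which depth is regressed on log-transformed band values against echo-sounder reference depths; all arrays here are synthetic stand-ins for the WorldView-2 bands and SBES points, and the band-attenuation constants are assumed:

```python
import numpy as np

# Log-linear (Lyzenga-type) inversion calibrated by multiple linear regression:
# depth ~ a0 + a1*ln(band1) + a2*ln(band2), fitted to reference depths.
rng = np.random.default_rng(2)
true_depth = rng.uniform(1.0, 20.0, 200)                 # SBES reference depths (m)
blue = 0.05 * np.exp(-0.10 * true_depth) + 0.005         # hypothetical band values
green = 0.06 * np.exp(-0.07 * true_depth) + 0.005

X = np.column_stack([np.ones_like(true_depth), np.log(blue), np.log(green)])
coeffs, *_ = np.linalg.lstsq(X, true_depth, rcond=None)  # a0, a1, a2
pred = X @ coeffs

residuals = pred - true_depth
rmse = np.sqrt(np.mean(residuals**2))
r2 = 1.0 - residuals.var() / true_depth.var()
print(f"RMSE = {rmse:.3f} m, R^2 = {r2:.3f}")
```

The residuals computed this way are what the study maps geographically and feeds into the spatial error model.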
Procedia PDF Downloads 297
1133 Zinc Oxide Nanoparticle-Doped Poly(8-Anilino-1-Napthalene Sulphonic Acid)/NAT Nanobiosensors for TB Drugs
Authors: Rachel Fanelwa Ajayi, Anovuyo Jonnas, Emmanuel I. Iwuoha
Abstract:
Tuberculosis (TB) is an infectious disease caused by the bacterium Mycobacterium tuberculosis, which has a predilection for lung tissue due to its rich oxygen supply. The mycobacterial cell has a unique innate characteristic that allows it to resist human immune systems and drug treatments; hence, it is one of the most difficult of all bacterial infections to treat, let alone to cure. At the same time, multi-drug-resistant TB (MDR-TB), caused by poorly managed TB treatment, is a growing problem and requires the administration of expensive and less effective second-line drugs, which require a much longer treatment duration than first-line drugs. Therefore, to address the issues of patients falling ill as a result of inappropriate dosing and inadequate treatment administration, a device with a fast response time coupled with enhanced performance and increased sensitivity is essential. This study involved the synthesis of electroactive platforms for application in the development of nano-biosensors suitable for the appropriate dosing of clinically diagnosed patients by promptly quantifying the levels of the TB drug isoniazid. These nano-biosensor systems were developed on gold surfaces using the enzyme N-acetyltransferase 2 coupled to cysteamine-modified poly(8-anilino-1-napthalene sulphonic acid)/zinc oxide nanocomposites. The morphology of the ZnO nanoparticles, the PANSA/ZnO nanocomposite, and the nano-biosensor platforms was characterized using High-Resolution Transmission Electron Microscopy (HRTEM) and High-Resolution Scanning Electron Microscopy (HRSEM). On the other hand, the elemental composition of the developed nanocomposites and nano-biosensors was studied using Fourier Transform Infra-Red Spectroscopy (FTIR) and Energy Dispersive X-Ray Spectroscopy (EDX). The electrochemical studies showed an increase in electron conductivity for the PANSA/ZnO nanocomposite, indicating its suitability as a platform for biosensor development.
Keywords: N-acetyltransferase 2, isoniazid, tuberculosis, zinc oxide
Procedia PDF Downloads 373
1132 Structural Stress of Hegemon's Power Loss: A Pestle Analysis for Pacification and Security Policy Plan
Authors: Sehrish Qayyum
Abstract:
Active military power contention is shifting to economic and cyber warfare to retain hegemony. An attuned PESTLE analysis confirms that the structural stress of a hegemon's power loss drives a containment approach towards caging actions. Ongoing diplomatic, asymmetric, proxy, and direct wars are increasing the stress on the hegemon's power retention due to tangled military and economic alliances. This creates a condition of catalepsy with defective reflexive control, which affects core warfare operations. When one's own power is doubted, it gives power to one's own doubt to ruin all planning, even planning done with superlative cost-benefit analysis. A strategically calculated estimation of the hegemon's power game, from the early WWI era to WWII, from WWII to the Cold War, and then to the current era, in three chronological periods, shows that Thucydides's trap became the reason wars broke out. Thirst for power is the demise of imagination and of the cooperation needed for better sense to prevail; instead, it drives ashes to dust. PESTLE analysis is a wide-ranging evaluation spanning the political and economic to the legal dimensions of state matters. It helps to develop a Pacification and Security Policy Plan (PSPP) to avoid the hegemon's structural stress of power loss and, in turn, to create alliances with maximally amicable outputs. The PSPP may serve to regulate and pause the hurricane of power clashes. The PSPP, along with a strategic work plan, is based on PESTLE analysis to deal with any conceivable war condition and to approach the saving of international peace. Getting tangled in self-imposed epistemic dilemmas results in regret becoming the only option of performance. A generic application of probability tests is used to find the best possible options and conditions to develop a PSPP for any adversity possible so far. Innovation in expertise begets innovation in planning and action plans, serving as a rheostat approach to deal with any plausible power clash.
Keywords: alliance, hegemon, PESTLE analysis, pacification and security policy plan, security
Procedia PDF Downloads 106
1131 Detection of Triclosan in Water Based on Nanostructured Thin Films
Authors: G. Magalhães-Mota, C. Magro, S. Sério, E. Mateus, P. A. Ribeiro, A. B. Ribeiro, M. Raposo
Abstract:
Triclosan [5-chloro-2-(2,4-dichlorophenoxy)phenol], belonging to the class of Pharmaceuticals and Personal Care Products (PPCPs), is a broad-spectrum antimicrobial agent and bactericide. Because of its antimicrobial efficacy, it is widely used in personal health and skin care products, such as soaps, detergents, hand cleansers, cosmetics, toothpastes, etc. However, it is considered to disrupt the endocrine system, for instance, thyroid hormone homeostasis and possibly the reproductive system. Considering the widespread use of triclosan, it is expected that environmental and food safety problems regarding triclosan will increase dramatically. Triclosan has been found in river water samples in both North America and Europe and is likely widely distributed wherever triclosan-containing products are used. Although significant amounts are removed in sewage plants, considerable quantities remain in the sewage effluent, initiating widespread environmental contamination. Triclosan undergoes bioconversion to methyl-triclosan, which has been demonstrated to bioaccumulate in fish. In addition, triclosan has been found in human urine samples from persons with no known industrial exposure and in significant amounts in samples of mother's milk, demonstrating its presence in humans. The action of sunlight in river water is known to turn triclosan into dioxin derivatives and raises the possibility of pharmacological dangers not envisioned when the compound was originally utilized. The aim of this work is to detect low concentrations of triclosan in an aqueous complex matrix through the use of a sensor array system, following the electronic tongue concept based on impedance spectroscopy. To achieve this goal, we selected molecules for the sensor with a high affinity for triclosan and a sensitivity that ensures the detection of concentrations of at least nano-molar level. Thin films of organic molecules and oxides were produced by the layer-by-layer (LbL) technique and sputtered onto glass solid supports already covered by gold interdigitated electrodes. By submerging the films in complex aqueous solutions with different concentrations of triclosan, resistance and capacitance values were obtained at different frequencies. The preliminary results showed that an array of interdigitated electrode sensors, coated or uncoated with different LbL films, can be used to detect TCS traces in aqueous solutions over a wide concentration range, from 10⁻¹² to 10⁻⁶ M. The PCA method was applied to the measured data in order to differentiate the solutions with different concentrations of TCS. Moreover, it was also possible to trace a calibration curve, the plot of the logarithm of resistance versus the logarithm of concentration, which allowed us to fit the plotted data points with a decreasing straight line with a slope of 0.022 ± 0.006, corresponding to the best sensitivity of our sensor. To find the sensor resolution near the smallest concentration used (Cs = 1 pM): the minimum value that can be measured with resolution is 0.006, so ΔlogC = 0.006/0.022 = 0.273 and, therefore, C - Cs ≈ 0.9 pM. This leads to a sensor resolution of 0.9 pM near the smallest concentration used, 1 pM. This detection limit is lower than the values reported in the literature.
Keywords: triclosan, layer-by-layer, impedance spectroscopy, electronic tongue
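The resolution arithmetic above can be made explicit; a short sketch using only the fitted slope and the minimum resolvable log-resistance step quoted in the abstract:

```python
# Translate the smallest distinguishable change in log10(resistance) into a
# concentration resolution near Cs = 1 pM, using the calibration slope of
# 0.022 decades of resistance per decade of concentration.
slope, delta_logR, Cs = 0.022, 0.006, 1e-12   # values quoted in the abstract
delta_logC = delta_logR / slope               # ≈ 0.273 decades of concentration
C = Cs * 10**delta_logC                       # nearest distinguishable concentration
print(f"resolution near 1 pM: {(C - Cs) * 1e12:.2f} pM")  # ≈ 0.87 pM ~ 0.9 pM
```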
Procedia PDF Downloads 252
1130 Crustal Scale Seismic Surveys in Search for Gawler Craton Iron Oxide Cu-Au (IOCG) under Very Deep Cover
Authors: E. O. Okan, A. Kepic, P. Williams
Abstract:
Iron oxide copper gold (IOCG) deposits constitute important sources of copper and gold in Australia, especially since the discovery of the supergiant Olympic Dam deposit in 1975. They are considered to be metasomatic expressions of large crustal-scale alteration events occasioned by intrusive action and are associated with felsic igneous rocks in most cases, commonly potassic igneous magmatism, with the deposits ranging from ~2.2-1.5 Ga in age. For the past two decades, geological, geochemical and potential-field methods have been used to identify the structures hosting these deposits, followed up by drilling. Though these methods have largely been successful for shallow targets, at greater depths their low resolution limits them to mapping only very large to gigantic deposits with sufficient contrast. As the search for ore bodies under regolith cover continues due to depletion of the near-surface deposits, there is a compelling need to develop new exploration technology to explore these deep-seated ore bodies within 1-4 km, which is the current mining depth range. The seismic reflection method represents this new technology, as it offers a distinct advantage over all other geophysical techniques because of its great depth of penetration and superior spatial resolution maintained with depth. Further, in many different geological scenarios, it offers a greater '3D mapability' of units within the stratigraphic boundary. Despite these superior attributes, no arguments for crustal-scale seismic surveys have been put forward because there has not been a compelling argument of economic benefit to proceed with such work. For the seismic reflection method to be used at these scales (100's to 1000's of square km covered), the technical risks or the survey costs have to be reduced. In addition, as most IOCG deposits have large footprints due to their association with intrusions and large fault zones, we hypothesized that these deposits can be found mainly by looking for the seismic signatures of intrusions along prospective structures. In this study, we present two such cases: the Olympic Dam and Vulcan iron-oxide copper-gold (IOCG) deposits, both located in the Gawler Craton, South Australia. Results from our 2D modelling experiments revealed that seismic reflection surveys using 20 m geophone and 40 m shot spacing are a feasible exploration tool for locating IOCG deposits even when they are hosted in very complex structures. The migrated sections were able not only to identify and trace various layers and complex structures but also to show reflections around the edges of intrusive packages. The presence of such intrusions was clearly detected over the 100 m to 1000 m depth range without loss of resolution. The modelled seismic images match the available real seismic data and have the hypothesized characteristics; thus, the seismic method seems to be a valid exploration tool for finding IOCG deposits. We therefore propose that 2D seismic surveys are viable for IOCG exploration, as they can detect mineralised intrusive structures along known favourable corridors. This would help reduce the exploration risk associated with locating undiscovered resources, as well as support life-of-mine studies that enable better development decisions at the very beginning.
Keywords: crustal scale, exploration, IOCG deposit, modelling, seismic surveys
Procedia PDF Downloads 325
1129 Monte Carlo Simulation of Thyroid Phantom Imaging Using Geant4-GATE
Authors: Parimalah Velo, Ahmad Zakaria
Abstract:
Introduction: Monte Carlo simulations of preclinical imaging systems enable new research that can range from designing hardware up to the discovery of new imaging applications. A simulation system that can accurately model an imaging modality provides a platform for imaging developments that might be inconvenient in physical experimental systems due to expense, unnecessary radiation exposure, and technological difficulties. The aim of the present study is to validate the Monte Carlo simulation of thyroid phantom imaging using Geant4-GATE for a Siemens e.cam single-head gamma camera. Upon validation of the gamma camera simulation model by comparing physical characteristics such as energy resolution, spatial resolution, sensitivity, and dead time, the GATE simulation of thyroid phantom imaging was carried out. Methods: A thyroid phantom was defined geometrically, comprising two lobes of 80 mm diameter, one hot spot, and three cold spots. This geometry accurately resembles the actual dimensions of the thyroid phantom. A planar image of 500k counts with a 128x128 matrix size was acquired using the simulation model and in the actual experimental setup. Upon image acquisition, quantitative image analysis was performed by investigating the total number of counts in the image, the contrast of the image, the radioactivity distribution in the image, and the dimensions of the hot spot. The algorithm for each quantification is described in detail. The difference between estimated and actual values for both the simulation and the experimental setup was analyzed for the radioactivity distribution and the dimensions of the hot spot. Results: The results show that the difference between the contrast levels of the simulated image and the experimental image is within 2%. The difference in the total counts between the simulation and the actual study is 0.4%. The results of activity estimation show that the relative difference between estimated and actual activity for the experimental and simulation studies is 4.62% and 3.03%, respectively. The deviation in the estimated diameter of the hot spot is the same for both the simulation and the experimental study, at 0.5 pixel. In conclusion, the comparisons show good agreement between the simulation and experimental data.
Keywords: gamma camera, Geant4 application of tomographic emission (GATE), Monte Carlo, thyroid imaging
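A minimal sketch of the kind of quantitative comparison described in the Methods, using illustrative 128x128 Poisson count maps rather than actual GATE or camera output, and an assumed hot-spot region:

```python
import numpy as np

# Compare a simulated and an experimental planar image by total counts and by
# hot-spot contrast against background; the count maps and the hot-spot mask
# are placeholders, not GATE or e.cam data.
def relative_difference(a, b):
    return abs(a - b) / b * 100.0  # percent

rng = np.random.default_rng(3)
sim = rng.poisson(30, (128, 128)).astype(float)
exp = rng.poisson(30, (128, 128)).astype(float)

hot = np.zeros((128, 128), dtype=bool)
hot[60:68, 60:68] = True                 # assumed hot-spot region

def contrast(img):
    return (img[hot].mean() - img[~hot].mean()) / img[~hot].mean()

print(f"total-count difference: {relative_difference(sim.sum(), exp.sum()):.2f} %")
print(f"contrast difference:    {abs(contrast(sim) - contrast(exp)):.4f}")
```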
Procedia PDF Downloads 271
1128 Clinch Process Simulation Using Diffuse Elements
Authors: Benzegaou Ali, Brani Benabderrahmane
Abstract:
This work describes a numerical study of the TOX clinching process using diffuse elements. A computer code named SEMA (Static Explicit Method Analysis) was developed to simulate the clinch joining process. The FE code is based on an updated Lagrangian scheme. The resolution method is based on a static explicit approach. The integration of the elasto-plastic behavior law is realized with the algorithm of Simo and Taylor. The tools are represented by plane facets.
Keywords: diffuse elements, numerical simulation, clinching, contact, large deformation
Procedia PDF Downloads 363
1127 On the Possibility of Real Time Characterisation of Ambient Toxicity Using Multi-Wavelength Photoacoustic Instrument
Authors: Tibor Ajtai, Máté Pintér, Noémi Utry, Gergely Kiss-Albert, Andrea Palágyi, László Manczinger, Csaba Vágvölgyi, Gábor Szabó, Zoltán Bozóki
Abstract:
To the best of the authors' knowledge, we here demonstrate experimentally, for the first time, a quantified correlation between a real-time measured optical feature of the ambient aerosol and off-line measured toxicity data; using these correlations, we then present a novel methodology for the real-time characterisation of ambient toxicity based on multi-wavelength aerosol-phase photoacoustic measurement. Ambient carbonaceous particulate matter is one of the most intensively studied atmospheric constituents in climate science nowadays. Beyond its climatic impact, atmospheric soot also plays an important role as an air pollutant that harms human health. Moreover, according to the latest scientific assessments, ambient soot is the second most important anthropogenic emission source, while from a health perspective it is one of the most harmful atmospheric constituents as well. Despite its importance, a generally accepted standard methodology for the quantitative determination of ambient toxicity is not yet available. Ambient toxicity measurement is predominantly based on the posterior analysis of filter-accumulated aerosol with limited time resolution. Most toxicological studies are based on operational definitions using different measurement protocols; therefore, comprehensive analysis of the existing data sets is quite limited in many cases. The situation is further complicated by the fact that, even during its relatively short residence time, the physicochemical features of the aerosol can be masked significantly by the actual ambient factors. Therefore, improving the time resolution of the existing methodology and developing real-time methodology for air quality monitoring are pressing issues in air pollution research. During the last decades, many experimental studies have verified that there is a relation between the chemical composition of carbonaceous particulate matter and its absorption features, quantified by the Absorption Angström Exponent (AAE). Although the scientific community agrees that photoacoustic spectroscopy (PAS) is so far the only methodology that can measure light absorption by aerosol accurately and reliably, multi-wavelength PAS instruments able to selectively characterise the wavelength dependency of absorption have become available only in the last decade. In this study, the first results of an intensive measurement campaign focusing on the physicochemical and toxicological characterisation of ambient particulate matter are presented. Here we demonstrate the complete microphysical characterisation of wintertime urban ambient air, including optical absorption and scattering as well as size distribution, using our recently developed state-of-the-art multi-wavelength photoacoustic instrument (4λ-PAS), an integrating nephelometer (Aurora 3000), as well as a scanning mobility particle sizer and optical particle counter (SMPS+C). Beyond this on-line characterisation of the ambient air, we also demonstrate the results of the eco-, cyto- and genotoxicity measurements of ambient aerosol based on the posterior analysis of filter-accumulated aerosol with 6 h time resolution. We demonstrate a diurnal variation of toxicities and of AAE data deduced directly from the multi-wavelength absorption measurement results.
Keywords: photoacoustic spectroscopy, absorption Angström exponent, toxicity, Ames-test
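The AAE referred to above has a standard definition: it is the negative slope of log absorption versus log wavelength. A minimal sketch with illustrative wavelengths and synthetic absorption values, not the campaign's data:

```python
import numpy as np

# Absorption Angström Exponent: fit -d(ln b_abs)/d(ln wavelength) across all
# wavelengths of a multi-wavelength photoacoustic measurement.
def aae(wavelengths_nm, babs):
    slope = np.polyfit(np.log(wavelengths_nm), np.log(babs), 1)[0]
    return -slope

wl = np.array([266.0, 355.0, 532.0, 1064.0])  # e.g. a 4-wavelength instrument
babs = 1e5 * wl**-1.0                         # synthetic b_abs with AAE = 1
print(f"AAE = {aae(wl, babs):.2f}")
```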
Procedia PDF Downloads 302
1126 Suitable Site Selection of Small Dams Using Geo-Spatial Technique: A Case Study of Dadu Tehsil, Sindh
Authors: Zahid Khalil, Saad Ul Haque, Asif Khan
Abstract:
Decision-making about identifying suitable sites for any project by considering different parameters is difficult. Using GIS and Multi-Criteria Analysis (MCA) can make it easier for such projects. This technology has proved to be efficient and adequate in acquiring the desired information. In this study, GIS and MCA were employed to identify suitable sites for small dams in Dadu Tehsil, Sindh. GIS software was used to create all the spatial parameters for the analysis. The derived parameters are slope, drainage density, rainfall, land use/land cover, soil groups, Curve Number (CN) and runoff index, with a spatial resolution of 30 m. The data used for deriving the above layers include the 30-meter resolution SRTM DEM, Landsat 8 imagery, rainfall from the National Centers for Environmental Prediction (NCEP), and soil data from the Harmonized World Soil Database (HWSD). The land use/land cover map was derived from Landsat 8 using supervised classification. Slope, drainage network, and watershed were delineated by terrain processing of the DEM. The Soil Conservation Service (SCS) method was implemented to estimate the surface runoff from the rainfall. Prior to this, the SCS-CN grid was developed by integrating the soil and land use/land cover rasters. These layers, together with some technical and ecological constraints, were assigned weights on the basis of suitability criteria. The pairwise comparison method of the Analytical Hierarchy Process (AHP) was used as the MCA technique for assigning weights to each decision element. All the parameters and groups of parameters were integrated using weighted overlay in a GIS environment to produce suitable sites for the dams. The resultant layer was then classified into four classes: best suitable, suitable, moderate, and less suitable. This study contributes to decision-making about suitable-site analysis for small dams using geospatial data with a minimal amount of ground data. These suitability maps can be helpful for water resource management organizations in the determination of feasible rainwater harvesting (RWH) structures.
Keywords: remote sensing, GIS, AHP, RWH
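The AHP step above derives criterion weights from a reciprocal pairwise comparison matrix via its principal eigenvector; a minimal sketch, where the 3x3 matrix (say slope, drainage density, runoff) and its judgements are illustrative, not the study's:

```python
import numpy as np

# AHP: weights are the normalized principal eigenvector of the pairwise
# comparison matrix; the consistency ratio checks the judgements.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)   # consistency index
cr = ci / 0.58                         # Saaty's random index for n = 3
print("weights:", np.round(weights, 3), f"CR = {cr:.3f}")
```

In the weighted overlay, these weights would then multiply the normalized criterion rasters and the products would be summed per cell.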
Procedia PDF Downloads 389
1125 Bee Colony Optimization Applied to the Bin Packing Problem
Authors: Kenza Aida Amara, Bachir Djebbar
Abstract:
We treat the two-dimensional bin packing problem, which involves packing a given set of rectangles into a minimum number of larger identical rectangles called bins. This combinatorial problem is NP-hard. We propose a pretreatment for the oriented version of the problem that allows the recovery of lost areas in the bins and a reduction of the problem size. A heuristic method based on the first-fit strategy, adapted to this problem, is presented. We present an approach to resolution by bee colony optimization. Computational results compare the number of bins used with and without the pretreatment.
Keywords: bee colony optimization, bin packing, heuristic algorithm, pretreatment
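To make the first-fit idea concrete, here is a sketch of a shelf-based first-fit heuristic for oriented 2D bin packing; it is one common adaptation of first-fit to rectangles (sorted by decreasing height, in the first-fit-decreasing spirit), not the authors' exact heuristic, pretreatment, or bee colony algorithm:

```python
# Oriented 2D first-fit with horizontal shelves: each bin (W x H) is filled
# shelf by shelf; a rectangle goes on the first shelf of the first bin where
# it fits, else a new shelf or a new bin is opened.
def first_fit_shelves(rects, W, H):
    bins = []  # each bin: list of shelves, a shelf is [y, shelf_height, x_used]
    for w, h in sorted(rects, key=lambda r: -r[1]):   # decreasing height
        placed = False
        for shelves in bins:
            for shelf in shelves:                     # first shelf that fits
                if shelf[2] + w <= W and h <= shelf[1]:
                    shelf[2] += w
                    placed = True
                    break
            if placed:
                break
            y_top = shelves[-1][0] + shelves[-1][1]   # try opening a new shelf
            if y_top + h <= H and w <= W:
                shelves.append([y_top, h, w])
                placed = True
                break
        if not placed:                                # open a new bin
            bins.append([[0, h, w]])
    return len(bins)

rects = [(4, 3), (5, 2), (2, 2), (6, 4), (3, 3), (4, 4), (2, 1)]
print("bins used:", first_fit_shelves(rects, W=8, H=6))
```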
Procedia PDF Downloads 633
1124 A Case Report: The Role of Gut Directed Hypnotherapy in Resolution of Irritable Bowel Syndrome in a Medication Refractory Pediatric Male Patient
Authors: Alok Bapatla, Pamela Lutting, Mariastella Serrano
Abstract:
Background: Irritable Bowel Syndrome (IBS) is a functional gastrointestinal disorder characterized by abdominal pain associated with altered bowel habits in the absence of an underlying organic cause. Although the exact etiology of IBS is not fully understood, one of the leading theories postulates a pathology within the brain-gut axis that leads to an overall increase in gastrointestinal sensitivity and adverse changes in gastrointestinal motility. Research and clinical practice have shown that Gut-Directed Hypnotherapy (GDH) has a beneficial clinical role in improving mind-gut control and thereby comorbid conditions such as anxiety, abdominal pain, constipation, and diarrhea. Aims: This study presents a 17-year-old male with underlying anxiety and a one-year history of IBS-Constipation Predominant Subtype (IBS-C), who demonstrated impressive improvement of symptoms following GDH treatment after refractory trials of medications including bisacodyl, senna, docusate, magnesium citrate, lubiprostone, and linaclotide. Method: The patient was referred to a licensed clinical psychologist specializing in clinical hypnosis and cognitive-behavioral therapy (CBT), who implemented 'The Standardized Hypnosis Protocol for IBS' developed by Dr. Olafur S. Palsson, Psy.D., at the University of North Carolina at Chapel Hill. The hypnotherapy protocol consisted of a total of seven weekly 45-minute sessions supplemented with a 20-minute audio recording to be listened to once daily. Outcome variables included the GAD-7, PHQ-9 and DCI-2, as well as self-ratings (ranging 0-10) for pain (intensity and frequency), emotional distress about IBS symptoms, and overall emotional distress. All variables were measured at intake prior to administration of the hypnosis protocol and at the conclusion of the hypnosis treatment. A retrospective IBS Questionnaire (IBS Severity Scoring System) was also completed at the conclusion of the GDH treatment for pre- and post-treatment ratings of clinical symptoms. Results: The patient showed improvement in all outcome variables and self-ratings, including abdominal pain intensity, frequency of abdominal pain episodes, emotional distress relating to gut issues, depression, and anxiety. The IBS Questionnaire showed a significant improvement from a severity score of 400 (defined as severe) prior to the GDH intervention to 55 (defined as complete resolution) at four months after the last session. IBS Questionnaire subset questions that showed a significant score improvement included abdominal pain intensity, days of pain experienced per 10 days, satisfaction with bowel habits, and overall interference with life caused by IBS symptoms. Conclusion: This case supports the existing research literature showing that GDH has a significantly beneficial role in improving symptoms in patients with IBS. Emphasis is placed on the numerical results of the IBS Questionnaire, which reflect a patient who initially suffered from severe IBS with failed response to multiple medications and who subsequently showed full and sustained resolution.
Keywords: pediatrics, constipation, irritable bowel syndrome, hypnotherapy, gut-directed hypnosis
Procedia PDF Downloads 198
1123 Explicit Chain Homotopic Function to Compute Hochschild Homology of the Polynomial Algebra
Authors: Zuhier Altawallbeh
Abstract:
In this paper, an explicit homotopic function is constructed to compute the Hochschild homology of the symmetric algebra of a finite dimensional free k-module V. Because the polynomial algebra is of course fundamental in the computation of the Hochschild homology HH and the cyclic homology CH of commutative algebras, we concentrate our work on computing HH of the polynomial algebra by providing a certain homotopic function.
Keywords: Hochschild homology, homotopic function, free and projective modules, free resolution, exterior algebra, symmetric algebra
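For orientation, the standard objects involved can be written down; the following records the textbook Hochschild boundary and the Hochschild-Kostant-Rosenberg identification for the polynomial algebra, as background rather than the author's explicit homotopy:

```latex
% Hochschild complex of a k-algebra A: C_n(A) = A^{\otimes(n+1)}, with boundary
\[
b(a_0 \otimes \cdots \otimes a_n)
  = \sum_{i=0}^{n-1} (-1)^i\, a_0 \otimes \cdots \otimes a_i a_{i+1} \otimes \cdots \otimes a_n
  + (-1)^n\, a_n a_0 \otimes a_1 \otimes \cdots \otimes a_{n-1}.
\]
% For the polynomial algebra, the Hochschild-Kostant-Rosenberg theorem gives
\[
HH_n\bigl(k[x_1,\dots,x_d]\bigr) \;\cong\; \Omega^n_{k[x_1,\dots,x_d]/k},
\]
% the module of Kaehler n-forms, free over k[x_1,\dots,x_d] of rank \binom{d}{n}.
```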
Procedia PDF Downloads 405
1122 Automatic Vertical Wicking Tester Based on Optoelectronic Techniques
Authors: Chi-Wai Kan, Kam-Hong Chau, Ho-Shing Law
Abstract:
Wicking properties are important for textile finishing and wear comfort. Good wicking properties can ensure uniformity and efficiency of textile treatments. In view of wear comfort, quick-wicking fabrics facilitate the evaporation of sweat. Therefore, the wetness sensation of the skin is minimised to prevent discomfort. The testing method for vertical wicking was standardised by the American Association of Textile Chemists and Colorists (AATCC) in 2011. The traditional vertical wicking test is prone to human error when observing fast-changing and/or unclear wicking heights. This study introduces optoelectronic devices to achieve an automatic Vertical Wicking Tester (VWT) and reduce human error. The VWT can record the wicking time and wicking height of samples. By reducing the difficulties of manual judgment, the reliability of the vertical wicking experiment is greatly increased. Furthermore, labour is greatly decreased by using the VWT. The automatic measurement of the VWT uses optoelectronic devices to trace the liquid wicking with a simple operating procedure. The optoelectronic devices detect the colour difference between dry and wet samples. This allows high sensitivity, down to an irradiance difference of 10 μW/cm². Therefore, the VWT is capable of testing dark fabrics. The VWT gives a wicking distance (wicking height) with 1 mm resolution and a wicking time with one-second resolution. Acknowledgment: This is a research project of HKRITA funded by the Innovation and Technology Fund (ITF) with the title "Development of an Automatic Measuring System for Vertical Wicking" (ITP/055/20TP). The authors would like to thank the ITF for the financial support. Any opinions, findings, conclusions or recommendations expressed in this material/event (or by members of the project team) do not reflect the views of the Government of the Hong Kong Special Administrative Region, the Innovation and Technology Commission or the Panel of Assessors for the Innovation and Technology Support Programme of the Innovation and Technology Fund and the Hong Kong Research Institute of Textiles and Apparel. Also, we would like to thank the support and sponsorship from Lai Tak Enterprises Limited, Kingis Development Limited and Wing Yue Textile Company Limited.
Keywords: AATCC method, comfort, textile measurement, wetness sensation
Procedia PDF Downloads 101
1121 Optimising Transcranial Alternating Current Stimulation
Authors: Robert Lenzie
Abstract:
Transcranial electrical stimulation (tES) features significantly in the research literature. However, the effects of tES on brain activity are still poorly understood at the scalp-surface level, at the Brodmann area level, and in terms of the impact on neural networks. Using a method like electroencephalography (EEG) in conjunction with tES might make it possible to understand in more depth the brain response and the mechanisms behind the alterations reported in the published literature. A method for directly observing the effect of tES on the EEG may offer high temporal resolution data on the brain activity changes/modulations brought on by tES that correlate with various processing stages within the brain. This paper provides unpublished information on a cutting-edge methodology that may reveal details about the dynamics of how the human brain works beyond what is now achievable with existing methods.
Keywords: tACS, frequency, EEG, optimal
Procedia PDF Downloads 81
1120 Invisible Aircraft Using Plasma Display
Authors: C. Ramamoorthy, R. Ranga Raj
Abstract:
The ancient Ramayana epic depicts the use of an invisible and fuel-less aircraft named the Pushpavimana. The colour change of the chameleon, a member of the reptile family, paves the way for the concept of a colour-change phenomenon available in nature. In the present scenario, aircraft are visible and therefore easily identified, which exposes them to many threats. Research on this problem is still ongoing using Liquid Crystal Displays (LCD). The objective of this paper is to show that it is much better to use the concept of an invisible aircraft based on a plasma display fed by a Charge-Coupled Device (CCD) camera, which has a high resolution and can be used for many purposes like spying, defense, etc. Moreover, it is cheap in cost terms and escapes the foe's view.
Keywords: CCD camera, chameleon, invisible, plasma display
Procedia PDF Downloads 403
1119 A Geosynchronous Orbit Synthetic Aperture Radar Simulator for Moving Ship Targets
Authors: Linjie Zhang, Baifen Ren, Xi Zhang, Genwang Liu
Abstract:
Ship detection is of great significance for both military and civilian applications. Synthetic aperture radar (SAR), with its all-day, all-weather, ultra-long-range characteristics, has been used widely. In view of the low time resolution of low-orbit SAR and the need for high-time-resolution SAR data, geosynchronous orbit (GEO) SAR is getting more and more attention. Since GEO SAR has a short revisit period and a large coverage area, it is expected to be well suited to monitoring marine ship targets. However, the height of the orbit increases the integration time by almost two orders of magnitude. For moving marine vessels, the utility and efficacy of GEO SAR are still uncertain. This paper explores the feasibility of GEO SAR by presenting a GEO SAR simulator for moving ships. The presented GEO SAR simulator is a geometrical-based radar imaging simulator, which focuses on geometrical quality rather than high radiometric fidelity. Its inputs are a 3D ship model (.obj format, produced by most 3D design software, such as 3D Max), the ship's velocity, and the parameters of the satellite orbit and SAR platform. Its outputs are simulated GEO SAR raw signal data and a SAR image. The simulation process is accomplished in the following four steps. (1) Reading the 3D model, including the ship rotation (pitch, yaw, and roll) and velocity (speed and direction) parameters, and extracting the information of those small primitives (triangles) that are visible from the SAR platform. (2) Computing the radar scattering from the ship with the physical optics (PO) method. In this step, the vessel is sliced into many small rectangular primitives along the azimuth. The radiometric calculation of each primitive is carried out separately. Since this simulator focuses only on the complex structure of ships, only single-bounce and double-bounce reflections are considered. (3) Generating the raw data with GEO SAR signal modeling. Since the normal 'stop and go' model is not valid for GEO SAR, the range model has to be reconsidered. (4) Finally, generating the GEO SAR image with an improved range-Doppler method. Numerical simulations of a fishing boat and a cargo ship are given. GEO SAR images for different postures, velocities, satellite orbits, and SAR platforms are simulated. By analyzing these simulated results, the effectiveness of GEO SAR for the detection of moving marine vessels is evaluated.
Keywords: GEO SAR, radar, simulation, ship
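A minimal sketch of the range-history idea behind step (3), where the 'stop and go' approximation is dropped and the exact satellite-to-ship range is evaluated at every slow-time instant; the orbit geometry, wavelength, and ship motion below are crude placeholders, not the simulator's values:

```python
import numpy as np

# Evaluate the exact range history R(t) over a long GEO aperture from the
# instantaneous satellite and (moving) ship positions; the echo phase then
# follows from the two-way path, with no 'stop and go' assumption.
wavelength = 0.24                       # an assumed L-band carrier (m)
t = np.linspace(-100.0, 100.0, 2001)    # slow time (s); long GEO integration
omega_e = 7.29e-5                       # Earth rotation rate (rad/s)

r_geo = 42164e3                         # GEO orbit radius (m)
sat = np.stack([r_geo * np.cos(omega_e * t),
                r_geo * np.sin(omega_e * t),
                np.zeros_like(t)], axis=1)
ship = np.stack([6371e3 + 0.0 * t,      # ship on the surface, moving at 5 m/s
                 5.0 * t,
                 np.zeros_like(t)], axis=1)

R = np.linalg.norm(sat - ship, axis=1)  # exact range history R(t)
phase = -4.0 * np.pi * R / wavelength   # two-way phase of the echo
print(f"range varies by {R.max() - R.min():.1f} m over the aperture")
print(f"phase excursion: {np.ptp(phase):.0f} rad")
```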
Procedia PDF Downloads 177
1118 Acoustic Analysis of Psycho-Communication Disorders within Moroccan Students
Authors: Brahim Sabir
Abstract:
Psycho-communication disorders negatively affect the academic progress of students in higher education. Understanding these disorders, their causes, and their effects will give education specialists a decision-making tool that will lead to the resolution of problems related to the integration of students with psycho-communication disorders. It is in this context that a statistical study was conducted on the target population, namely Moroccan students. Pathological voice samples were recorded and analysed acoustically with the PRAAT software in order to build a model that will serve as the basis for an objective diagnosis. Keywords: psycho-communication disorders, acoustic analysis, PRAAT
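The abstract does not list the acoustic features used; a plausible minimal sketch of PRAAT-style voice analysis, written here with the parselmouth Python bindings for Praat (an assumption — the authors worked in PRAAT directly, and the file name is hypothetical), follows.

```python
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("voice_sample.wav")   # hypothetical recording
point_process = call(snd, "To PointProcess (periodic, cc)", 75, 500)

# Classic pathological-voice markers: jitter, shimmer, harmonics-to-noise.
jitter = call(point_process, "Get jitter (local)", 0, 0, 0.0001, 0.02, 1.3)
shimmer = call([snd, point_process], "Get shimmer (local)",
               0, 0, 0.0001, 0.02, 1.3, 1.6)
hnr = call(snd.to_harmonicity(), "Get mean", 0, 0)

print(f"jitter={jitter:.4f}  shimmer={shimmer:.4f}  HNR={hnr:.1f} dB")
```

Features such as these could then feed the statistical model the abstract mentions.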
Procedia PDF Downloads 389
1117 Digital Holographic Interferometric Microscopy for the Testing of Micro-Optics
Authors: Varun Kumar, Chandra Shakher
Abstract:
Micro-optical components such as microlenses and microlens arrays have numerous engineering and industrial applications: collimation of laser diodes, imaging devices for sensor systems (CCD/CMOS, document copiers, etc.), beam homogenisation for high-power lasers, the critical component of the Shack-Hartmann sensor, and fibre-optic coupling and optical switching in communication technology. Micro-optical components have also become an alternative for applications where miniaturisation and reduced alignment and packaging costs are necessary. Compliance with high quality standards in the manufacture of micro-optical components is a precondition for competitiveness in worldwide markets, so high demands are placed on quality assurance. For the quality assurance of these lenses, an economical measurement technique is needed. For reasons of cost and time, the technique should be fast, simple (for production reasons), and robust, with high resolution, and it should provide non-contact, non-invasive, full-field information about the shape of the micro-optical component under test. Interferometric techniques are non-contact and non-invasive and provide full-field information about the shape of optical components. Conventional interferometric techniques such as holographic interferometry or Mach-Zehnder interferometry are available for characterising microlenses; however, they require greater experimental effort and are time-consuming. Digital holography (DH) overcomes these problems. Digital holographic microscopy (DHM) allows one to extract both the amplitude and the phase of a wavefront transmitted through a transparent object (a microlens or microlens array) from a single recorded digital hologram using numerical methods, and the complex object wavefront can be reconstructed numerically at different depths. Digital holography provides axial resolution in the nanometre range, while lateral resolution is limited by diffraction and the sensor size. In this paper, a Mach-Zehnder-based digital holographic interferometric microscope (DHIM) is used to test transparent microlenses. The advantage of the DHIM is that distortions due to aberrations in the optical system are avoided by the interferometric comparison of the reconstructed phase with and without the object (the microlens array). In the experiment, a first digital hologram is recorded in the absence of the sample (the microlens array) as a reference hologram, and a second hologram is recorded with the microlens array present; the transparent microlens array induces a phase change in the transmitted laser light. The complex amplitude of the object wavefront with and without the microlens array is reconstructed using the Fresnel reconstruction method, and from the reconstructed complex amplitudes the phase of the object wave in each state is evaluated. The phase difference between the two states gives the optical path length change due to the shape of the microlens. Given the refractive indices of the microlens array material and of air, the surface profile of the microlens array is evaluated, and the sag and radius of curvature of the microlenses are reported. The measured sag agrees with the manufacturer's specification within experimental limits. Keywords: micro-optics, microlens array, phase map, digital holographic interferometric microscopy
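A minimal numerical sketch of the two-hologram evaluation described above (Python; the wavelength, distance, pixel pitch, refractive index, and stand-in hologram data are all assumptions, not the authors' values):

```python
import numpy as np

def fresnel_reconstruct(hologram, wavelength, z, dx):
    """Single-FFT Fresnel reconstruction of a (square) recorded hologram."""
    n = hologram.shape[0]
    x = (np.arange(n) - n / 2) * dx
    X, Y = np.meshgrid(x, x)
    chirp = np.exp(1j * np.pi / (wavelength * z) * (X**2 + Y**2))
    return np.fft.fftshift(np.fft.fft2(hologram * chirp))

wavelength, z, dx = 632.8e-9, 0.05, 3.45e-6  # HeNe laser, 5 cm, pixel pitch
n_lens, n_air = 1.46, 1.0                    # assumed fused-silica microlens

holo_ref = np.random.rand(1024, 1024)  # stand-ins for the two recorded
holo_obj = np.random.rand(1024, 1024)  # holograms (reference / with array)

u_ref = fresnel_reconstruct(holo_ref, wavelength, z, dx)
u_obj = fresnel_reconstruct(holo_obj, wavelength, z, dx)

# Interferometric phase difference between the two states; unwrap both axes.
dphi = np.angle(u_obj * np.conj(u_ref))
dphi = np.unwrap(np.unwrap(dphi, axis=0), axis=1)

# Optical path difference -> surface (sag) profile of the microlens.
sag = dphi * wavelength / (2 * np.pi * (n_lens - n_air))
print(f"peak sag estimate: {sag.max() * 1e6:.2f} um")
```

With real hologram pairs in place of the random stand-ins, the final line yields the sag value compared against the manufacturer's specification.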
Procedia PDF Downloads 498
1116 Study of Mixing Conditions for Different Endothelial Dysfunction in Arteriosclerosis
Authors: Sara Segura, Diego Nuñez, Miryam Villamil
Abstract:
In this work, we studied the microscale interaction of foreign substances with blood inside an artificial transparent artery system representing medium and small muscular arteries. The artery system had channels ranging from 75 μm to 930 μm and was fabricated from glass and transparent polymer blends such as phenylbis(2,4,6-trimethylbenzoyl)phosphine oxide, poly(ethylene glycol), and PDMS so that it could be monitored in real time. The setup comprised a computer-controlled precision micropump and a high-resolution optical microscope capable of tracking fluids at fast capture rates. Observation and analysis were performed with real-time software that reconstructs the fluid dynamics, determining flow velocity, injection dependency, turbulence, and rheology. All experiments were carried out with fully computer-controlled equipment. Interactions between substances such as water, serum (0.9% sodium chloride and electrolyte at a ratio of 4 ppm), and blood cells were studied at the microscale at resolutions down to 400 nm, and the analysis was performed using frame-by-frame observation and HD video capture. These observations help us understand the flow and mixing behaviour of the substance of interest in the bloodstream and shed light on the use of implantable devices for drug delivery in arteries with different endothelial dysfunctions. Several substances were tested in the artificial artery system. Initially, Milli-Q water was used as a control substance to study the basic fluid dynamics of the system; serum and other low-viscosity substances were then pumped into the system in the presence of other liquids to study mixing profiles and behaviours. Finally, mammalian blood was used for the final test while serum was injected. Different flow conditions, pumping rates, and timings were evaluated to determine the optimal mixing conditions. Our results suggest the use of very finely controlled microinjection, at an approximate rate of 135.000 μm³/s, for better mixing profiles in the administration of drugs inside arteries. Keywords: artificial artery, drug delivery, microfluidics dynamics, arteriosclerosis
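As context for the flow regime in such channels, the back-of-envelope sketch below (assumed blood density, viscosity, and velocity — none taken from the paper) shows that the channel Reynolds number stays deep in the laminar regime across the 75-930 μm range, which is why mixing there is diffusion-dominated and injection control matters so much.

```python
def reynolds(velocity, diameter, density=1060.0, viscosity=3.5e-3):
    """Re = rho * v * D / mu, with assumed whole-blood density/viscosity."""
    return density * velocity * diameter / viscosity

for d_um in (75, 400, 930):
    re = reynolds(velocity=0.05, diameter=d_um * 1e-6)  # 5 cm/s assumed
    print(f"D = {d_um:4d} um -> Re = {re:.3f}  (laminar, Re << 2000)")
```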
Procedia PDF Downloads 294
1115 DEMs: A Multivariate Comparison Approach
Authors: Juan Francisco Reinoso Gordo, Francisco Javier Ariza-López, José Rodríguez Avi, Domingo Barrera Rosillo
Abstract:
The evaluation of the quality of a data product is based on comparing the product with a reference of greater accuracy. In the case of DEM data products, quality assessment usually focuses on positional accuracy, and few studies consider other terrain characteristics such as slope and aspect. The proposal made here consists of evaluating the similarity of two DEMs (a product and a reference) through joint analysis of the distribution functions of the variables of interest, for example elevations, slopes, and aspects. This is a multivariate approach that focuses on distribution functions rather than on single parameters such as mean values or dispersions (e.g. root mean squared error or variance), and is considered more holistic. The Kolmogorov-Smirnov test is proposed because of its non-parametric nature, since the distributions of the variables of interest cannot always be adequately modelled by parametric models (e.g. the normal distribution). In addition, its application to the multivariate case is carried out jointly by means of a single test on the convolution of the distribution functions of the variables considered, which avoids corrections such as Bonferroni when several statistical hypothesis tests are carried out together. In this work, two DEM products were considered: DEM02, with a resolution of 2x2 m, and DEM05, with a resolution of 5x5 m, both generated by the National Geographic Institute of Spain. DEM02 is taken as the reference and DEM05 as the product to be evaluated, and the derived slope and aspect models were calculated by GIS operations on the two DEM datasets. Through sample simulation, the behaviour of the Kolmogorov-Smirnov test was verified when the null hypothesis is true, which allows the value of the statistic to be calibrated for the desired significance level (e.g. 5%). Once calibrated, the same process can be applied to compare the similarity of different DEM datasets (e.g. DEM05 versus DEM02). In summary, an innovative alternative for comparing DEM datasets, based on a multivariate non-parametric perspective, is proposed by means of a single Kolmogorov-Smirnov test. This approach could be extended to other DEM features of interest (e.g. curvature) and to more than three variables. Keywords: data quality, DEM, Kolmogorov-Smirnov test, multivariate DEM comparison
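A minimal sketch of the single-test idea (Python; the synthetic DEM grids and the finite-difference derivation of slope and aspect are stand-ins, not the Institute's data): standardise each variable, sum them so that the distribution of the combined variable is the convolution of the component distributions, and apply one two-sample Kolmogorov-Smirnov test.

```python
import numpy as np
from scipy.stats import ks_2samp

def slope_aspect(dem, cellsize):
    """Slope and aspect (radians) from a DEM grid via finite differences."""
    dzdy, dzdx = np.gradient(dem, cellsize)
    return np.arctan(np.hypot(dzdx, dzdy)), np.arctan2(dzdy, -dzdx)

def combined_sample(dem, cellsize):
    """Sum of standardised elevation, slope and aspect: the combined
    variable's distribution is the convolution of the three components'."""
    slope, aspect = slope_aspect(dem, cellsize)
    parts = []
    for v in (dem, slope, aspect):
        v = v.ravel()
        parts.append((v - v.mean()) / v.std())
    return np.sum(parts, axis=0)

rng = np.random.default_rng(1)          # stand-ins for DEM02 / DEM05
dem02 = rng.normal(500, 40, (400, 400))
dem05 = rng.normal(500, 40, (160, 160))

stat, p = ks_2samp(combined_sample(dem02, 2.0), combined_sample(dem05, 5.0))
print(f"KS statistic = {stat:.4f}, p = {p:.4f}")
```

In the paper's procedure, the critical value of the statistic would come from the simulation-based calibration rather than the asymptotic p-value used here.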
Procedia PDF Downloads 115
1114 Resolving Problems Experienced by Involving Patients in the Development of Pharmaceutical Products at Post-Launch Stage of Pharmaceutical Product Development
Authors: Clara T. Fatoye, April Betts, Abayomi Odeyemi, Francis A. Fatoye, Isaac O. Odeyemi
Abstract:
Background: The post-launch stage is the last stage in the development of a pharmaceutical product. It is important to involve patients at this stage, as patients are the end users of pharmaceutical products, and involving them may ensure an effective working relationship among the various stakeholders. However, involving patients in the development of pharmaceutical products comes with its own problems. This study therefore examined how to resolve the problems experienced when involving patients in pharmaceutical product development at post-launch, comprising the positioning of pharmaceutical products (POPP), the detailing of pharmaceutical products (DOPP), and reimbursement and formulary submission (R&FS). Methods: A questionnaire was administered at ISPOR Glasgow 2017 to 104 participants, all professionals with market access (MA) and health economics and outcomes research (HEOR) backgrounds, who were asked how the issues experienced by patients can be resolved. Participants responded under six domains: communication, cost, effectiveness, external factors, quality of life (QoL), and safety. Thematic analysis was carried out to identify strategies for resolving the issues experienced by patients at the post-launch stage. Results: Three factors cut across POPP, DOPP, and R&FS: external factors, communication, and QoL. The first resolution method was an external factor, namely the relationship with stakeholders and policymakers. Communication was also identified as a method that can help resolve the problems experienced by patients at the post-launch stage. The third method was QoL as perceived by the patients, based on the professionals' opinions. Other resolution strategies were the effectiveness of pharmaceutical products at the DOPP level and cost at R&FS. Conclusion: The study showed that focusing on external factors, communication, and patients' QoL are methods for resolving the issues experienced when involving patients at the post-launch stage of pharmaceutical product development; effective working relationships between patients, policymakers, and stakeholders may therefore help resolve these problems. Healthcare policymakers should be aware of these findings, as they may help them put appropriate strategies in place to enhance patient involvement in pharmaceutical product development at the post-launch stage, thereby improving patients' health outcomes. Keywords: patients, pharmaceutical products, post-launch stage, quality of life, QoL
Procedia PDF Downloads 130
1113 Statistical Pattern Recognition for Biotechnological Process Characterization Based on High Resolution Mass Spectrometry
Authors: S. Fröhlich, M. Herold, M. Allmer
Abstract:
Early-stage quantitative analysis of host cell protein (HCP) variation is challenging yet necessary for comprehensive bioprocess development. High-resolution mass spectrometry (HRMS) is a high-end technology providing accurate identification together with quantitative information. Here we describe a flexible HRMS assay platform for quantifying HCPs relevant to microbial expression systems such as E. coli, in both upstream and downstream development, by means of multivariate data analysis (MVDA) tools. Cell pellets were lysed and proteins extracted; purified samples were not treated further before applying the SMART tryptic digest kit. Peptide separation was optimised on an RP-UHPLC separation platform, and HRMS-MS/MS analysis was conducted on an Orbitrap Velos Elite applying CID. Quantification was performed label-free, taking into account ionisation properties and physicochemical peptide similarities. Results were analysed using SIEVE 2.0 (Thermo Fisher Scientific) and SIMCA (Umetrics AG). The developed HRMS platform was applied to an E. coli expression set with varying productivity and the corresponding downstream process; selected HCPs were successfully quantified within the fmol range. Analysing HCP networks by pattern analysis facilitated low-level quantification and enhanced validity. This approach is highly relevant for high-throughput screening experiments during upstream development, e.g. for titre determination, dynamic HCP network analysis, or product characterisation. For the downstream purification process, physicochemical clustering of the identified HCPs is relevant for adjusting buffer conditions accordingly. The technology thus provides an innovative approach to label-free MS-based quantification that relies on statistical pattern analysis and comparison. Absolute quantification based on physicochemical properties and a peptide similarity score does not require sophisticated sample preparation strategies and has proven straightforward, sensitive, and highly reproducible for product characterisation. Keywords: process analytical technology, mass spectrometry, process characterization, MVDA, pattern recognition
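The pattern-recognition step can be pictured with a minimal MVDA-style sketch (Python; the synthetic intensities and matrix shapes are assumptions — the authors used SIEVE 2.0 and SIMCA, not this code): a PCA view of label-free peptide intensities across process samples, from which correlated HCP behaviour can be read off.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n_samples, n_peptides = 24, 300          # e.g. 24 process samples
intensities = rng.lognormal(mean=10, sigma=1, size=(n_samples, n_peptides))

# Log-transform and autoscale: the usual preprocessing before MVDA.
X = StandardScaler().fit_transform(np.log2(intensities))

pca = PCA(n_components=3).fit(X)
scores = pca.transform(X)                # sample pattern per process condition
print("explained variance:", np.round(pca.explained_variance_ratio_, 3))
print("first sample scores:", np.round(scores[0], 2))
```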
Procedia PDF Downloads 249