Search results for: habitat evolution
110 Application of the Standard Deviation in Regulating Design Variation of Urban Solutions Generated through Evolutionary Computation
Authors: Mohammed Makki, Milad Showkatbakhsh, Aiman Tabony
Abstract:
Computational applications of natural evolutionary processes as problem-solving tools have been well established since the mid-20th century. However, their application within architecture and design has only gained ground in recent years, with an increasing number of academics and professionals in the field electing to utilize evolutionary computation to address problems comprising multiple conflicting objectives with no clear optimal solution. Recent advances in computer science and their consequent constructive influence on the architectural discourse have led to the emergence of multiple algorithmic processes capable of simulating the evolutionary process in nature within an efficient timescale. Many of the developed processes of generating a population of candidate solutions to a design problem through an evolutionary based stochastic search process are often driven through the application of both environmental and architectural parameters. These methods allow conflicting objectives to be simultaneously, independently, and objectively optimized. This is an essential approach in design problems whose final product must address the demands of a multitude of individuals with various requirements. However, one of the main challenges encountered through the application of an evolutionary process as a design tool is the ability of the simulation to maintain variation amongst design solutions in the population while simultaneously increasing in fitness. This is most commonly known as the ‘golden rule’ of balancing exploration and exploitation over time; the difficulty of achieving this balance in the simulation is due to the tendency of either variation or optimization being favored as the simulation progresses. In such cases, the generated population of candidate solutions has either optimized very early in the simulation, or has continued to maintain high levels of variation from which an optimal set could not be discerned; thus providing the user with a solution set that has not evolved efficiently towards the objectives outlined in the problem at hand. As such, the experiments presented in this paper seek to achieve the ‘golden rule’ by incorporating a mathematical fitness criterion for the development of an urban tissue comprised of the superblock as its primary architectural element. The mathematical value investigated in the experiments is the standard deviation factor. Traditionally, the standard deviation has been used as an analytical value rather than a generative one, conventionally employed to measure the distribution of variation within a population by calculating the degree to which the majority of the population deviates from the mean. A lower standard deviation value indicates that most of the population is clustered around the mean, and thus limited variation within the population, while a higher standard deviation value reflects greater variation within the population and a lack of convergence towards an optimal solution. The results presented aim to clarify the extent to which the utilization of the standard deviation factor as a fitness criterion can be advantageous to generating fitter individuals in a more efficient timeframe when compared to conventional simulations that only incorporate architectural and environmental parameters. Keywords: architecture, computation, evolution, standard deviation, urban
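For readers unfamiliar with how a dispersion measure can be turned into a generative criterion, the following minimal Python sketch (not the authors' implementation; the weighting w_sd and the way objectives are aggregated are assumptions) illustrates a fitness function that rewards objective quality while crediting the generation for maintaining spread, measured by the standard deviation.

```python
import numpy as np

def sd_fitness(population_objectives, w_sd=0.5):
    """Hypothetical fitness that combines objective quality with a
    standard-deviation term rewarding variation in the generation.

    population_objectives: (n_individuals, n_objectives) array where
    lower raw objective values are assumed to be better.
    """
    raw = population_objectives.mean(axis=1)           # aggregate objective score per individual
    spread = population_objectives.std(axis=0).mean()  # average spread across objectives
    # Reward individuals for good raw scores, and the whole generation
    # for keeping some variation while it converges.
    return -raw + w_sd * spread

# Toy usage: 20 candidate urban-tissue layouts scored on 3 objectives
rng = np.random.default_rng(0)
objectives = rng.random((20, 3))
print(sd_fitness(objectives))
```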
Procedia PDF Downloads 133
109 The Effect of Artificial Intelligence on Mobile Phones and Communication Systems
Authors: Ibram Khalafalla Roshdy Shokry
Abstract:
This paper presents a carrier sense multiple access (CSMA) communication model based on an SoC design methodology. Such a model can be used to guide the modelling of complex wireless communication systems; consequently, the use of such a communication model is an important method in the construction of high-performance communication. SystemC has been selected as it provides a homogeneous design flow for complex designs (i.e., SoC and IP-based design). We use a swarm system to validate the designed CSMA model and to show the advantages of incorporating communication early in the design process. The wireless communication is created through the modelling of the CSMA protocol, which can be used to achieve communication among all of the agents and to coordinate access to the shared medium (channel). Equipping vehicles with wireless communication capabilities is expected to be the key to the evolution towards next-generation intelligent transportation systems (ITS). The IEEE community has been continuously working on the development of a wireless vehicular communication protocol for the enhancement of Wireless Access in Vehicular Environments (WAVE). Vehicular communication systems, known as V2X, support vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. The efficiency of such communication systems depends on several factors, among which the surrounding environment and mobility are prominent. As a result, this study focuses on the evaluation of the actual performance of vehicular communication, with particular attention to the effects of the real environment and mobility on V2X communication. It begins by determining the actual maximum range that such communication can support and then evaluates V2I and V2V performances. The Arada LocoMate OBU transmission device was used to test and evaluate the effect of the transmission range in V2X communication. The evaluation of V2I and V2V communication takes the real effects of low and high mobility on transmission into consideration. Multi-agent systems have received considerable attention in numerous fields, including robotics, autonomous vehicles, and distributed computing, where multiple agents cooperate and communicate to achieve complex tasks. Efficient communication among agents is a critical aspect of these systems, as it directly influences their overall performance and scalability. This work explores essential communication factors and conducts a comparative assessment of various protocols used in multi-agent systems. The emphasis lies in scrutinizing the strengths, weaknesses, and applicability of these protocols across diverse scenarios. The study also sheds light on emerging trends in communication protocols for multi-agent systems, including the incorporation of machine learning techniques and the adoption of blockchain-based solutions to ensure secure communication. These developments offer valuable insights into the evolving landscape of multi-agent systems and their communication protocols. Keywords: communication, multi-agent systems, protocols, consensus, SystemC, modelling, simulation, CSMA
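The abstract describes a SystemC model; purely as a language-neutral illustration, the toy Python sketch below captures the carrier-sense/back-off idea behind CSMA channel access among agents. The slot count, the agent names and the single-winner rule are simplifying assumptions, not part of the reported model.

```python
import random

def csma_round(agents_ready, max_backoff=8):
    """One slotted round of a toy CSMA channel-access model.
    Each ready agent draws a random back-off slot; if exactly one agent
    holds the earliest slot it transmits, otherwise those agents collide."""
    slots = {a: random.randrange(max_backoff) for a in agents_ready}
    first = min(slots.values())
    contenders = [a for a, s in slots.items() if s == first]
    if len(contenders) == 1:
        return contenders[0], []       # successful transmission
    return None, contenders            # collision; contenders back off again

random.seed(1)
ready = ["vehicle_1", "vehicle_2", "rsu_1"]   # hypothetical agents
for _ in range(3):
    print(csma_round(ready))
```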
Procedia PDF Downloads 25
108 Comparison between Bernardi’s Equation and Heat Flux Sensor Measurement as Battery Heat Generation Estimation Method
Authors: Marlon Gallo, Eduardo Miguel, Laura Oca, Eneko Gonzalez, Unai Iraola
Abstract:
The heat generation of an energy storage system is an essential topic when designing a battery pack and its cooling system. Heat generation estimation is used together with thermal models to predict battery temperature in operation and to adapt the design of the battery pack and the cooling system to these thermal needs, guaranteeing its safety and correct operation. In the present work, a comparison between the use of a heat flux sensor (HFS) for indirect measurement of heat losses in a cell and the widely used simplified version of Bernardi’s equation for heat generation estimation is presented. First, a Li-ion cell is thermally characterized with an HFS to measure the thermal parameters that are used in a first-order lumped thermal model. These parameters are the equivalent thermal capacity and the equivalent thermal resistance of a single Li-ion cell. Static (when no current is flowing through the cell) and dynamic (making current flow through the cell) tests are conducted in which the HFS is used to measure the heat exchanged between the cell and the ambient, so that the thermal capacity and resistance, respectively, can be calculated. An experimental platform records current, voltage, ambient temperature, surface temperature, and HFS output voltage. Second, an equivalent circuit model is built in a Matlab-Simulink environment. This allows the comparison between the generated heat predicted by Bernardi’s equation and the HFS measurements. Data post-processing is required to extrapolate the heat generation from the HFS measurements, as the sensor records the heat released to the ambient and not the heat generated within the cell. Finally, the cell temperature evolution is estimated with the lumped thermal model (using both the HFS and Bernardi’s equation total heat generation) and compared with experimental temperature data (measured with a T-type thermocouple). At the end of this work, a critical review of the results obtained and the possible reasons for mismatch are reported. The results show that indirectly measuring the heat generation with the HFS gives a more precise estimation than Bernardi’s simplified equation. On the one hand, when using Bernardi’s simplified equation, the estimated heat generation differs from cell temperature measurements during charges at high current rates. Additionally, for low-capacity cells, where a small change in capacity has a great influence on the terminal voltage, the estimated heat generation shows a high dependency on the State of Charge (SoC) estimation, and therefore on the open circuit voltage calculation (as it is SoC dependent). On the other hand, when indirectly measuring the heat generation with the HFS, the resulting error is a maximum of 0.28 °C in the temperature prediction, in contrast with 1.38 °C with Bernardi’s simplified equation. This illustrates the limitations of Bernardi’s simplified equation for applications where precise heat monitoring is required. For higher current rates, Bernardi’s equation estimates more heat generation and, consequently, a higher predicted temperature. Bernardi’s equation accounts for no losses after the charging or discharging current is cut off. However, the HFS measurement shows that after cutting the current the cell continues generating heat for some time, increasing the error of Bernardi’s equation. Keywords: lithium-ion battery, heat flux sensor, heat generation, thermal characterization
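As a hedged illustration of the two estimation routes compared in the study, the sketch below implements the simplified Bernardi heat-generation term and a first-order lumped thermal model in Python. The sign convention (current positive on charge), the parameter values and the explicit-Euler integration are assumptions for illustration; the authors' Matlab-Simulink model may differ.

```python
def bernardi_heat(current, v_cell, v_ocv, temp_k, docv_dt):
    """Simplified Bernardi heat generation [W]:
    irreversible term current*(v_cell - v_ocv) plus reversible (entropic)
    term current*temp_k*docv_dt. Current is taken positive on charge;
    sign conventions vary between references."""
    return current * (v_cell - v_ocv) + current * temp_k * docv_dt

def lumped_temperature(q_gen, t_amb, r_th, c_th, dt, t0):
    """First-order lumped thermal model: C_th * dT/dt = Q - (T - T_amb)/R_th,
    integrated with an explicit Euler step of size dt [s]."""
    temps = [t0]
    for q in q_gen:
        t = temps[-1]
        temps.append(t + dt * (q - (t - t_amb) / r_th) / c_th)
    return temps

# Illustrative one-hour charge at constant current with invented parameters
q = [bernardi_heat(5.0, 3.90, 3.75, 298.15, 2e-4) for _ in range(3600)]
temps = lumped_temperature(q, t_amb=25.0, r_th=3.0, c_th=60.0, dt=1.0, t0=25.0)
print(round(temps[-1], 2))
```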
Procedia PDF Downloads 389
107 Contrastive Analysis of Parameters Registered in Training Rowers and the Impact on the Olympic Performance
Authors: Gheorghe Braniste
Abstract:
The management of the training process in sports is closely related to the awareness of the close connection between performance and the morphological, functional and psychological characteristics of the athlete's body. Achieving high results in Olympic sports is influenced, on the one hand, by the genetically determined characteristics of the body and, on the other hand, by the morphological, functional and motor abilities of the athlete. Taking into account the importance of properly understanding the evolutionary specificity of athletes to assess their competitive potential, this study provides a comparative analysis of the parameters that characterize the growth, development and level of adaptation of sweep rowers over the age interval between 12 and 20 years. The study established that, in the multi-annual training process, the bodies of the targeted athletes register significant adaptive changes in the parameters of the morphological, functional, psychomotor and sports-technical spheres. As a result of the influence of physical efforts, both specific and non-specific, there is an increase in the adaptability of the body and its transfer to a much higher level of functionality, with useful and economical adaptive reactions influenced by environmental factors, be they internal or external. The research was carried out over 7 years, on a group of 28 athletes, following their evolution and recording the specific parameters of each age stage. In order to determine the level of physical, morpho-functional and psychomotor development and the technical training of rowers, the screening data were collected at the State University of Physical Education and Sports of the Republic of Moldova. During the research, measurements were made of height in the standing and sitting positions, arm span, weight, chest circumference and perimeter, and vital capacity of the lungs, with the subsequent determination of the vital index; tolerance to oxygen deficiency in venous blood in the Stange and Genchi breath-holding tests, which characterize the level of oxygen saturation; absolute and relative strength of the hand and back; calculation of body mass and morphological maturity indices (Kettle index) and body surface area; psychomotor tests (Romberg test, 10 s tapping test, reaction to a moving object, visual and auditory-motor reaction); and recording of the technical parameters of rowing over a competitive distance of 200 m. At the end of the study, it was found that high performance in sports is to be associated, on the one hand, with the genetically determined characteristics of the body and, on the other hand, with favorable adaptive reactions and energy saving, as well as morphofunctional changes influenced by internal and external environmental factors. The importance of the results obtained at the end of the study was positively reflected in reaching the maximum level of training of athletes in order to achieve performance in large-scale competitions, most notably the Olympic Games. Keywords: olympics, parameters, performance, peak
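Purely as an illustration of the kind of derived indices mentioned above (vital index, relative strength, body mass index), the short Python sketch below shows how such ratios could be computed; the exact definitions and units used in the study are not given in the abstract, so these formulas and values are assumptions.

```python
def body_mass_index(mass_kg, height_m):
    # Mass-to-height ratio commonly used as a morphological maturity index
    return mass_kg / height_m ** 2

def vital_index(vital_capacity_ml, mass_kg):
    # Vital capacity of the lungs per kilogram of body mass (mL/kg)
    return vital_capacity_ml / mass_kg

def relative_strength(strength_kg, mass_kg):
    # Hand or back dynamometer reading normalized by body mass
    return strength_kg / mass_kg

# Hypothetical values for one rower
print(body_mass_index(78, 1.88), vital_index(5600, 78), relative_strength(62, 78))
```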
Procedia PDF Downloads 123
106 Evolution of Antimicrobial Resistance in Shigella since the Turn of 21st Century, India
Authors: Neelam Taneja, Abhishek Mewara, Ajay Kumar
Abstract:
Multidrug resistant shigellae have emerged as a therapeutic challenge in India. At our 2000-bed tertiary care referral centre in Chandigarh, North India, which caters to a large population across 7 neighboring states, antibiotic resistance in Shigella is being constantly monitored. Shigellae are isolated from 3 to 5% of all stool samples. In 1990, nalidixic acid was the drug of choice, as 82% and 63% of shigellae were resistant to ampicillin and cotrimoxazole, respectively. Nalidixic acid resistance emerged in 1992 and rapidly increased from 6% during 1994-98 to 86% by the turn of the 21st century. In the 1990s, the WHO recommended ciprofloxacin as the drug of choice for empiric treatment of shigellosis in view of the existing high-level resistance to agents like chloramphenicol, ampicillin, cotrimoxazole and nalidixic acid. The first resistance to ciprofloxacin in S. flexneri at our centre appeared in 2000 and rapidly rose to 46% in 2007 (MIC > 4 mg/L). In between, we had an outbreak of ciprofloxacin-resistant S. dysenteriae serotype 1 in 2003. Therapeutic failures with ciprofloxacin occurred with both ciprofloxacin-resistant S. dysenteriae and ciprofloxacin-resistant S. flexneri. The severity of illness was greater with ciprofloxacin-resistant strains. Until 2000, elsewhere in the world, ciprofloxacin resistance in S. flexneri was sporadic and uncommon, though resistance to co-trimoxazole and ampicillin was common and in some areas resistance to nalidixic acid had also emerged. Due to their extensive use and misuse for many other illnesses in our region, fluoroquinolones are thus no longer the preferred group of drugs for managing shigellosis in India. The WHO presently recommends ceftriaxone and azithromycin as alternative drugs against fluoroquinolone-resistant shigellae; however, overreliance on this group of drugs also seems likely to soon become questionable considering the emerging cephalosporin-resistant shigellae. We found 15.1% of S. flexneri isolates collected over a period of 9 years (2000-2009) resistant to at least one of the third-generation cephalosporins (ceftriaxone/cefotaxime). The first isolate showing ceftriaxone resistance was obtained in 2001, and we have observed an increase in the number of isolates resistant to third-generation cephalosporins in S. flexneri from 2005 onwards. This situation has now become a therapeutic challenge in our region. The MIC values for Shigella isolates revealed a worrisome rise for ceftriaxone (MIC90: 12 mg/L) and cefepime (MIC90: 8 mg/L). MIC values for S. dysenteriae remained below 1 mg/L for ceftriaxone; however, for cefepime, the MIC90 has risen to 4 mg/L. The infections caused by ceftriaxone-resistant S. flexneri isolates were successfully treated with azithromycin at our center. The most worrisome recent development has been the emergence of decreased susceptibility to azithromycin (DSA), which surfaced in 2001 and has increased from 4.3% up to 2011 to 34% thereafter. We suspect plasmid-mediated resistance, as we detected qnrS1-positive Shigella for the first time on the Indian subcontinent in 2 strains from 2010, indicating a relatively new appearance of this PMQR determinant among Shigella in India. This calls for continuous and strong surveillance of antibiotic resistance across the country. The prevention of shigellosis by developing cost-effective vaccines is desirable, as it will substantially reduce the morbidity associated with diarrhoea in the country. Keywords: Shigella, antimicrobial, resistance, India
Procedia PDF Downloads 229
105 Algorithmic Obligations: Proactive Liability for AI-Generated Content and Copyright Compliance
Authors: Aleksandra Czubek
Abstract:
As AI systems increasingly shape content creation, existing copyright frameworks face significant challenges in determining liability for AI-generated outputs. Current legal discussions largely focus on who bears responsibility for infringing works, be it developers, users, or entities benefiting from AI outputs. This paper introduces a novel concept of algorithmic obligations, proposing that AI developers be subject to proactive duties that ensure their models prevent copyright infringement before it occurs. Building on principles of obligations law traditionally applied to human actors, the paper suggests a shift from reactive enforcement to proactive legal requirements. AI developers would be legally mandated to incorporate copyright-aware mechanisms within their systems, turning optional safeguards into enforceable standards. These obligations could vary in implementation across international, EU, UK, and U.S. legal frameworks, creating a multi-jurisdictional approach to copyright compliance. This paper explores how the EU’s existing copyright framework, exemplified by the Copyright Directive (2019/790), could evolve to impose a duty of foresight on AI developers, compelling them to embed mechanisms that prevent infringing outputs. By drawing parallels to GDPR’s “data protection by design,” a similar principle could be applied to copyright law, where AI models are designed to minimize copyright risks. In the UK, post-Brexit text and data mining exemptions are seen as pro-innovation but pose risks to copyright protections. This paper proposes a balanced approach, introducing algorithmic obligations to complement these exemptions. AI systems benefiting from text and data mining provisions should integrate safeguards that flag potential copyright violations in real time, ensuring both innovation and protection. In the U.S., where copyright law focuses on human-centric works, this paper suggests an evolution toward algorithmic due diligence. AI developers would have a duty similar to product liability, ensuring that their systems do not produce infringing outputs, even if the outputs themselves cannot be copyrighted. This framework introduces a shift from post-infringement remedies to preventive legal structures, where developers actively mitigate risks. The paper also breaks new ground by addressing obligations surrounding the training data of large language models (LLMs). Currently, training data is often treated under exceptions such as the EU’s text and data mining provisions or U.S. fair use. However, this paper proposes a proactive framework where developers are obligated to verify and document the legal status of their training data, ensuring it is licensed or otherwise cleared for use. In conclusion, this paper advocates for an obligations-centered model that shifts AI-related copyright law from reactive litigation to proactive design. By holding AI developers to a heightened standard of care, this approach aims to prevent infringement at its source, addressing both the outputs of AI systems and the training processes that underlie them.Keywords: ip, technology, copyright, data, infringement, comparative analysis
Procedia PDF Downloads 18
104 The Direct and Indirect Effects of Buddhism on Fertility Rates in General and in Specific Socioeconomic Circumstances of Women
Authors: Szerena Vajkovszki
Abstract:
Our worldwide aging society, especially in developed countries, including members of EU, raise sophisticated sociological and economic issues and challenges to be met. As declining fertility has outstanding influence underlying this trend, numerous studies have attempted to identify, describe, measure and interpret contributing factors of the fertility rate, out of which relatively few revealed the impact of religion. Identified, examined and influential factors affecting birth rate as stated by the present scientific publications are more than a dozen out of which religious beliefs, traditions, and cultural norms were examined first with a special focus on abortion and forms of birth control. Nevertheless, connected to religion, not only these topics are crucial regarding fertility, but many others as well. Among many religious guidelines, we can separate two major categories: direct and indirect. The aim of this research was to understand what are the most crucial identified (family values, gender related behaviors, religious sentiments) and not yet identified most influential contributing religious factors. Above identifying these direct or indirect factors, it is also important to understand to what extent and how do they influence fertility, which requires a wider (inter-discipline) perspective. As proved by previous studies religion has also an influential role on health, mental state, well-being, working activity and many other components that are also related to fertility rates. All these components are inter-related. Hence direct and indirect religious effects can only be well understood if we figure out all necessary fields and their interaction. With the help of semi-structured opened interviews taking place in different countries, it was showed that indeed Buddhism has significant direct and indirect effect on fertility. Hence the initial hypothesis was proved. However, the interviews showed an overall positive effect; the results could only serve for a general understanding of how Buddhism affects fertility. Evolution of Buddhism’s direct and indirect influence may vary in different nations and circumstances according to their specific environmental attributes. According to the local patterns, with special regard to women’s position and role in the society, outstandingly indirect influences could show diversifications. So it is advisory to investigate more for a deeper and clearer understanding of how Buddhism function in different socioeconomic circumstances. For this purpose, a specific and detailed analysis was developed from recent related researches about women’s position (including family roles and economic activity) in Hungary with the intention to be able to have a complex vision of crucial socioeconomic factors influencing fertility. Further interviews and investigations are to be done in order to show a complex vision of Buddhism’s direct and indirect effect on fertility in Hungary to be able to support recommendations and policies pointing to higher fertility rates in the field of social policies. The present research could serve as a general starting point or a common basis for further specific national investigations.Keywords: Buddhism, children, fertility, gender roles, religion, women
Procedia PDF Downloads 151
103 Applying Napoleoni's 'Shell-State' Concept to Jihadist Organisations's Rise in Mali, Nigeria and Syria/Iraq, 2011-2015
Authors: Francesco Saverio Angiò
Abstract:
The Islamic State of Iraq and the Levant / Syria (ISIL/S), Al-Qaeda in the Islamic Maghreb (AQIM) and People Committed to the Propagation of the Prophet's Teachings and Jihad, also known as ‘Boko Haram’ (BH), have fought successfully against Syria and Iraq, Mali, Nigeria’s government, respectively. According to Napoleoni, the ‘shell-state’ concept can explain the economic dimension and the financing model of the ISIL insurgency. However, she argues that AQIM and BH did not properly plan their financial model. Consequently, her idea would not be suitable to these groups. Nevertheless, AQIM and BH’s economic performances and their (short) territorialisation suggest that their financing models respond to a well-defined strategy, which they were able to adapt to new circumstances. Therefore, Napoleoni’s idea of ‘shell-state’ can be applied to the three jihadist armed groups. In the last five years, together with other similar entities, ISIL/S, AQIM and BH have been fighting against governments with insurgent tactics and terrorism acts, conquering and ruling a quasi-state; a physical space they presented as legitimate territorial entity, thanks to a puritan version of the Islamic law. In these territories, they have exploited the traditional local economic networks. In addition, they have contributed to the development of legal and illegal transnational business activities. They have also established a justice system and created an administrative structure to supply services. Napoleoni’s ‘shell-state’ can describe the evolution of ISIL/S, AQIM and BH, which has switched from an insurgency to a proto or a quasi-state entity, enjoying a significant share of power over territories and populations. Napoleoni first developed and applied the ‘Shell-state’ concept to describe the nature of groups such as the Palestine Liberation Organisation (PLO), before using it to explain the expansion of ISIL. However, her original conceptualisation emphasises on the economic dimension of the rise of the insurgency, focusing on the ‘business’ model and the insurgents’ financing management skills, which permits them to turn into an organisation. However, the idea of groups which use, coordinate and grab some territorial economic activities (at the same time, encouraging new criminal ones), can also be applied to administrative, social, infrastructural, legal and military levels of their insurgency, since they contribute to transform the insurgency to the same extent the economic dimension does. In addition, according to Napoleoni’s view, the ‘shell-state’ prism is valid to understand the ISIL/S phenomenon, because the group has carefully planned their financial steps. Napoleoni affirmed that ISIL/S carries out activities in order to promote their conversion from a group relying on external sponsors to an entity that can penetrate and condition local economies. On the contrary, ‘shell-state’ could not be applied to AQIM or BH, which are acting more like smugglers. Nevertheless, despite its failure to control territories, as ISIL has been able to do, AQIM and BH have responded strategically to their economic circumstances and have defined specific dynamics to ensure a flow of stable funds. Therefore, Napoleoni’s theory is applicable.Keywords: shell-state, jihadist insurgency, proto or quasi-state entity economic planning, strategic financing
Procedia PDF Downloads 352
102 Effect of Climate Change on the Genomics of Invasiveness of the Whitefly Bemisia tabaci Species Complex by Estimating the Effective Population Size via a Coalescent Method
Authors: Samia Elfekih, Wee Tek Tay, Karl Gordon, Paul De Barro
Abstract:
Invasive species represent an increasing threat to food biosecurity, causing significant economic losses in agricultural systems. An example is the sweet potato whitefly, Bemisia tabaci, which is a complex of morphologically indistinguishable species causing average annual global damage estimated at US$2.4 billion. The Bemisia complex represents an interesting model for evolutionary studies because of their extensive distribution and potential for invasiveness and population expansion. Within this complex, two species, Middle East-Asia Minor 1 (MEAM1) and Mediterranean (MED) have invaded well beyond their home ranges whereas others, such as Indian Ocean (IO) and Australia (AUS), have not. In order to understand why some Bemisia species have become invasive, genome-wide sequence scans were used to estimate population dynamics over time and relate these to climate. The Bayesian Skyline Plot (BSP) method as implemented in BEAST was used to infer the historical effective population size. In order to overcome sampling bias, the populations were combined based on geographical origin. The datasets used for this particular analysis are genome-wide SNPs (single nucleotide polymorphisms) called separately in each of the following groups: Sub-Saharan Africa (Burkina Faso), Europe (Spain, France, Greece and Croatia), USA (Arizona), Mediterranean-Middle East (Israel, Italy), Middle East-Central Asia (Turkmenistan, Iran) and Reunion Island. The non-invasive ‘AUS’ species endemic to Australia was used as an outgroup. The main findings of this study show that the BSP for the Sub-Saharan African MED population is different from that observed in MED populations from the Mediterranean Basin, suggesting evolution under a different set of environmental conditions. For MED, the effective size of the African (Burkina Faso) population showed a rapid expansion ≈250,000-310,000 years ago (YA), preceded by a period of slower growth. The European MED populations (i.e., Spain, France, Croatia, and Greece) showed a single burst of expansion at ≈160,000-200,000 YA. The MEAM1 populations from Israel and Italy and the ones from Iran and Turkmenistan are similar as they both show the earlier expansion at ≈250,000-300,000 YA. The single IO population lacked the latter expansion but had the earlier one. This pattern is shared with the Sub-Saharan African (Burkina Faso) MED, suggesting IO also faced a similar history of environmental change, which seems plausible given their relatively close geographical distributions. In conclusion, populations within the invasive species MED and MEAM1 exhibited signatures of population expansion lacking in non-invasive species (IO and AUS) during the Pleistocene, a geological epoch marked by repeated climatic oscillations with cycles of glacial and interglacial periods. These expansions strongly suggested the potential of some Bemisia species’ genomes to affect their adaptability and invasiveness.Keywords: whitefly, RADseq, invasive species, SNP, climate change
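The study infers historical effective population size with the Bayesian Skyline Plot in BEAST; as a simplified, non-Bayesian illustration of the underlying coalescent idea, the sketch below computes the classic skyline estimate from genealogy interval lengths. The interval values are invented for demonstration and do not come from the Bemisia data.

```python
def classic_skyline(intervals):
    """Classic skyline estimate of effective population size.
    intervals: list of (k, g) pairs, where g is the length of the genealogy
    interval during which k lineages are present (k >= 2).
    Returns one Ne estimate per coalescent interval (haploid scaling):
    Ne_k = g * k * (k - 1) / 2, from the expected coalescence rate."""
    return [g * k * (k - 1) / 2.0 for k, g in intervals]

# Toy genealogy: 5 sampled lineages coalescing down to 1
print(classic_skyline([(5, 0.01), (4, 0.03), (3, 0.05), (2, 0.12)]))
```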
Procedia PDF Downloads 126
101 Tectono-Stratigraphic Architecture, Depositional Systems and Salt Tectonics to Strike-Slip Faulting in Kribi-Campo-Cameroon Atlantic Margin with an Unsupervised Machine Learning Approach (West African Margin)
Authors: Joseph Bertrand Iboum Kissaaka, Charles Fonyuy Ngum Tchioben, Paul Gustave Fowe Kwetche, Jeannette Ngo Elogan Ntem, Joseph Binyet Njebakal, Ribert Yvan Makosso-Tchapi, François Mvondo Owono, Marie Joseph Ntamak-Nida
Abstract:
Located in the Gulf of Guinea, the Kribi-Campo sub-basin belongs to the Aptian salt basins along the West African Margin. In this paper, we investigated the tectono-stratigraphic architecture of the basin, focusing on the role of salt tectonics and strike-slip faults along the Kribi Fracture Zone with implications for reservoir prediction. Using 2D seismic data and well data interpreted through sequence stratigraphy with integrated seismic attributes analysis with Python Programming and unsupervised Machine Learning, at least six second-order sequences, indicating three main stages of tectono-stratigraphic evolution, were determined: pre-salt syn-rift, post-salt rift climax and post-rift stages. The pre-salt syn-rift stage with KTS1 tectonosequence (Barremian-Aptian) reveals a transform rifting along NE-SW transfer faults associated with N-S to NNE-SSW syn-rift longitudinal faults bounding a NW-SE half-graben filled with alluvial to lacustrine-fan delta deposits. The post-salt rift-climax stage (Lower to Upper Cretaceous) includes two second-order tectonosequences (KTS2 and KTS3) associated with the salt tectonics and Campo High uplift. During the rift-climax stage, the growth of salt diapirs developed syncline withdrawal basins filled by early forced regression, mid transgressive and late normal regressive systems tracts. The early rift climax underlines some fine-grained hangingwall fans or delta deposits and coarse-grained fans from the footwall of fault scarps. The post-rift stage (Paleogene to Neogene) contains at least three main tectonosequences KTS4, KTS5 and KTS6-7. The first one developed some turbiditic lobe complexes considered as mass transport complexes and feeder channel-lobe complexes cutting the unstable shelf edge of the Campo High. The last two developed submarine Channel Complexes associated with lobes towards the southern part and braided delta to tidal channels towards the northern part of the Kribi-Campo sub-basin. The reservoir distribution in the Kribi-Campo sub-basin reveals some channels, fan lobes reservoirs and stacked channels reaching up to the polygonal fault systems.Keywords: tectono-stratigraphic architecture, Kribi-Campo sub-basin, machine learning, pre-salt sequences, post-salt sequences
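The abstract mentions seismic attribute analysis with Python programming and unsupervised machine learning; one plausible, hedged sketch of such a workflow is clustering standardized attribute vectors into seismic facies, for example with k-means as below. The attribute names, the cluster count and the random data are assumptions, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical attribute table: one row per seismic sample or trace window,
# columns such as amplitude envelope, instantaneous frequency, coherence.
rng = np.random.default_rng(42)
attributes = rng.normal(size=(5000, 3))

X = StandardScaler().fit_transform(attributes)          # put attributes on a common scale
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(X)
# Each label can then be mapped back onto the seismic section as a seismic facies.
print(np.bincount(labels))
```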
Procedia PDF Downloads 56
100 Thermal Ageing of a 316 Nb Stainless Steel: From Mechanical and Microstructural Analyses to Thermal Ageing Models for Long Time Prediction
Authors: Julien Monnier, Isabelle Mouton, Francois Buy, Adrien Michel, Sylvain Ringeval, Joel Malaplate, Caroline Toffolon, Bernard Marini, Audrey Lechartier
Abstract:
Chosen to design and assemble massive components for the nuclear industry, 316 Nb austenitic stainless steel (also called 316 Nb) suits this function well thanks to its mechanical, heat and corrosion resistance properties. However, these properties might change during the steel's life due to thermal ageing causing changes within its microstructure. Our main purpose is to determine whether the 316 Nb will keep its mechanical properties after exposure to industrial temperatures (around 300 °C) over a long period of time (< 10 years). The 316 Nb is composed of different phases: austenite as the main phase, niobium carbides, and ferrite remaining from the ferrite-to-austenite transformation during processing. Our purpose is to understand the effects of thermal ageing on the material microstructure and properties and to propose a model predicting the evolution of 316 Nb properties as a function of temperature and time. To do so, based on the Fe-Cr and 316 Nb phase diagrams, we studied the thermal ageing of 316 Nb steel alloys (1 vol% of ferrite) and welds (10 vol% of ferrite) at various temperatures (350, 400, and 450 °C) and ageing times (from 1 to 10,000 hours). Higher temperatures have been chosen to reduce thermal treatment time by exploiting the kinetic effect of temperature on 316 Nb ageing without modifying the reaction mechanisms. Our results from early ageing times show no effect on the steel's global properties linked to austenite stability, but an increase of ferrite hardness during thermal ageing has been observed. It has been shown that austenite's crystalline structure (fcc) grants it thermal stability; however, the ferrite crystalline structure (bcc) favours iron-chromium demixing and the formation of iron-rich and chromium-rich phases within the ferrite. Observations of the thermal ageing effects on the ferrite's microstructure were necessary to understand the changes caused by the thermal treatment. Analyses have been performed using different techniques such as Atom Probe Tomography (APT) and Differential Scanning Calorimetry (DSC). A demixing of the alloy's elements leading to the formation of iron-rich (α phase, bcc structure), chromium-rich (α' phase, bcc structure), and nickel-rich (fcc structure) phases within the ferrite has been observed and associated with the increase in ferrite hardness. APT results provide information about the phases' volume fractions and compositions, allowing hardness measurements to be associated with the volume fractions of the different phases and providing a way to calculate the growth rate of α' and nickel-rich particles as a function of temperature. The same methodology has been applied to DSC results, which allowed us to measure the enthalpy of α' phase dissolution between 500 and 600 °C. To summarize, we started from mechanical and macroscopic measurements and explained the results through microstructural study. The data obtained have been matched to CALPHAD model predictions and used to improve these calculations and to employ them to predict the change in 316 Nb properties during the industrial process. Keywords: stainless steel characterization, atom probe tomography APT, vickers hardness, differential scanning calorimetry DSC, thermal ageing
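To illustrate how accelerated ageing at 350-450 °C can be extrapolated to a 300 °C service temperature, the sketch below fits an Arrhenius-type relation to hypothetical times-to-criterion; the numerical values and the single-mechanism assumption are illustrative only and do not reproduce the study's CALPHAD-based model.

```python
import numpy as np

# Hypothetical times (h) to reach a given ferrite hardness increase at the
# accelerated ageing temperatures used in the study (450, 400, 350 degC).
temps_c = np.array([450.0, 400.0, 350.0])
times_h = np.array([300.0, 1500.0, 9000.0])   # illustrative values only

# Arrhenius kinetics: time-to-criterion ~ A * exp(Q / (R*T)),
# so ln(t) is linear in 1/T with slope Q/R.
inv_T = 1.0 / (temps_c + 273.15)
slope, intercept = np.polyfit(inv_T, np.log(times_h), 1)

R = 8.314
print("apparent activation energy [kJ/mol]:", slope * R / 1000.0)

# Extrapolated time to reach the same hardness criterion at 300 degC service temperature
t_service = np.exp(intercept + slope / (300.0 + 273.15))
print("extrapolated time at 300 degC [h]:", t_service)
```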
Procedia PDF Downloads 93
99 Unleashing the Power of Cerebrospinal System for a Better Computer Architecture
Authors: Lakshmi N. Reddi, Akanksha Varma Sagi
Abstract:
Studies on biomimetics are largely developed, deriving inspiration from natural processes in our objective world to develop novel technologies. Recent studies are diverse in nature, making their categorization quite challenging. Based on an exhaustive survey, we developed categorizations based on either the essential elements of nature - air, water, land, fire, and space, or on form/shape, functionality, and process. Such diverse studies as aircraft wings inspired by bird wings, a self-cleaning coating inspired by a lotus petal, wetsuits inspired by beaver fur, and search algorithms inspired by arboreal ant path networks lend themselves to these categorizations. Our categorizations of biomimetic studies allowed us to define a different dimension of biomimetics. This new dimension is not restricted to inspiration from the objective world. It is based on the premise that the biological processes observed in the objective world find their reflections in our human bodies in a variety of ways. For example, the lungs provide the most efficient example for liquid-gas phase exchange, the heart exemplifies a very efficient pumping and circulatory system, and the kidneys epitomize the most effective cleaning system. The main focus of this paper is to bring out the magnificence of the cerebro-spinal system (CSS) insofar as it relates to our current computer architecture. In particular, the paper uses four key measures to analyze the differences between CSS and human- engineered computational systems. These are adaptability, sustainability, energy efficiency, and resilience. We found that the cerebrospinal system reveals some important challenges in the development and evolution of our current computer architectures. In particular, the myriad ways in which the CSS is integrated with other systems/processes (circulatory, respiration, etc) offer useful insights on how the human-engineered computational systems could be made more sustainable, energy-efficient, resilient, and adaptable. In our paper, we highlight the energy consumption differences between CSS and our current computational designs. Apart from the obvious differences in materials used between the two, the systemic nature of how CSS functions provides clues to enhance life-cycles of our current computational systems. The rapid formation and changes in the physiology of dendritic spines and their synaptic plasticity causing memory changes (ex., long-term potentiation and long-term depression) allowed us to formulate differences in the adaptability and resilience of CSS. In addition, the CSS is sustained by integrative functions of various organs, and its robustness comes from its interdependence with the circulatory system. The paper documents and analyzes quantifiable differences between the two in terms of the four measures. Our analyses point out the possibilities in the development of computational systems that are more adaptable, sustainable, energy efficient, and resilient. It concludes with the potential approaches for technological advancement through creation of more interconnected and interdependent systems to replicate the effective operation of cerebro-spinal system.Keywords: cerebrospinal system, computer architecture, adaptability, sustainability, resilience, energy efficiency
Procedia PDF Downloads 97
98 Enhancing Financial Security: Real-Time Anomaly Detection in Financial Transactions Using Machine Learning
Authors: Ali Kazemi
Abstract:
The digital evolution of financial services, while offering unprecedented convenience and accessibility, has also escalated the vulnerabilities to fraudulent activities. In this study, we introduce a distinct approach to real-time anomaly detection in financial transactions, aiming to fortify the defenses of banking and financial institutions against such threats. Utilizing unsupervised machine learning algorithms, specifically autoencoders and isolation forests, our research focuses on identifying irregular patterns indicative of fraud within transactional data, thus enabling immediate action to prevent financial loss. The data we used in this study included the monetary value of each transaction. This is a crucial feature as fraudulent transactions may have distributions of different amounts than legitimate ones, such as timestamps indicating when transactions occurred. Analyzing transactions' temporal patterns can reveal anomalies (e.g., unusual activity in the middle of the night). Also, the sector or category of the merchant where the transaction occurred, such as retail, groceries, online services, etc. Specific categories may be more prone to fraud. Moreover, the type of payment used (e.g., credit, debit, online payment systems). Different payment methods have varying risk levels associated with fraud. This dataset, anonymized to ensure privacy, reflects a wide array of transactions typical of a global banking institution, ranging from small-scale retail purchases to large wire transfers, embodying the diverse nature of potentially fraudulent activities. By engineering features that capture the essence of transactions, including normalized amounts and encoded categorical variables, we tailor our data to enhance model sensitivity to anomalies. The autoencoder model leverages its reconstruction error mechanism to flag transactions that deviate significantly from the learned normal pattern, while the isolation forest identifies anomalies based on their susceptibility to isolation from the dataset's majority. Our experimental results, validated through techniques such as k-fold cross-validation, are evaluated using precision, recall, and the F1 score alongside the area under the receiver operating characteristic (ROC) curve. Our models achieved an F1 score of 0.85 and a ROC AUC of 0.93, indicating high accuracy in detecting fraudulent transactions without excessive false positives. This study contributes to the academic discourse on financial fraud detection and provides a practical framework for banking institutions seeking to implement real-time anomaly detection systems. By demonstrating the effectiveness of unsupervised learning techniques in a real-world context, our research offers a pathway to significantly reduce the incidence of financial fraud, thereby enhancing the security and trustworthiness of digital financial services.Keywords: anomaly detection, financial fraud, machine learning, autoencoders, isolation forest, transactional data analysis
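A minimal sketch of one of the two unsupervised detectors named in the study, the isolation forest, is given below using scikit-learn; the feature set, contamination rate and synthetic data are assumptions for illustration, not the study's dataset or tuning.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
# Hypothetical engineered features: transaction amount, hour of day,
# and a merchant-category code (assumed encodings, not the study's).
normal = np.column_stack([rng.normal(50, 15, 5000),
                          rng.integers(8, 22, 5000),
                          rng.integers(0, 5, 5000)])
fraud = np.column_stack([rng.normal(900, 200, 25),
                         rng.integers(0, 5, 25),
                         rng.integers(0, 5, 25)])
X = StandardScaler().fit_transform(np.vstack([normal, fraud]).astype(float))

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
scores = model.decision_function(X)   # lower score = more anomalous
flags = model.predict(X)              # -1 = flagged as anomaly, 1 = normal
print("flagged transactions:", int((flags == -1).sum()))
```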
Procedia PDF Downloads 57
97 Agri-Food Transparency and Traceability: A Marketing Tool to Satisfy Consumer Awareness Needs
Authors: Angelo Corallo, Maria Elena Latino, Marta Menegoli
Abstract:
The link between man and food plays, in the social and economic system, a central role where cultural and multidisciplinary aspects intertwine: food is not only nutrition, but also communication, culture, politics, environment, science, ethics, fashion. This multi-dimensionality has many implications in the food economy. In recent years, the consumer became more conscious about his food choices, involving a consistent change in consumption models. This change concerns several aspects: awareness of food system issues, employment of socially and environmentally conscious decision-making, food choices based on different characteristics than nutritional ones i.e. origin of food, how it’s produced, and who’s producing it. In this frame the ‘consumption choices’ and the ‘interests of the citizen’ become one part of the others. The figure of the ‘Citizen Consumer’ is born, a responsible and ethically motivated individual to change his lifestyle, achieving the goal of sustainable consumption. Simultaneously the branding, that before was guarantee of the product quality, today is questioned. In order to meet these needs, Agri-Food companies are developing specific product lines that follow two main philosophies: ‘Back to basics’ and ‘Less is more’. However, the issue of ethical behavior does not seem to find an adequate on market offer. Most likely due to a lack of attention on the communication strategy used, very often based on market logic and rarely on ethical one. The label in its classic concept of ‘clean labeling’ can no longer be the only instrument through which to convey product information and its evolution towards a concept of ‘clear label’ is necessary to embrace ethical and transparent concepts in progress the process of democratization of the Food System. The implementation of a voluntary traceability path, relying on the technological models of the Internet of Things or Industry 4.0, would enable the Agri-Food Supply Chain to collect data that, if properly treated, could satisfy the information need of consumers. A change of approach is therefore proposed towards Agri-Food traceability that is no longer intended as a tool to be used to respond to the legislator, but rather as a promotional tool useful to tell the company in a transparent manner and then reach the slice of the market of food citizens. The use of mobile technology can also facilitate this information transfer. However, in order to guarantee maximum efficiency, an appropriate communication model based on the ethical communication principles should be used, which aims to overcome the pipeline communication model, to offer the listener a new way of telling the food product, based on real data collected through processes traceability. The Citizen Consumer is therefore placed at the center of the new model of communication in which he has the opportunity to choose what to know and how. The new label creates a virtual access point capable of telling the product according to different point of views, following the personal interests and offering the possibility to give several content modalities to support different situations and usability.Keywords: agri food traceability, agri-food transparency, clear label, food system, internet of things
Procedia PDF Downloads 158
96 Analysis of Flow Dynamics of Heated and Cooled Pylon Upstream to the Cavity past Supersonic Flow with Wall Heating and Cooling
Authors: Vishnu Asokan, Zaid M. Paloba
Abstract:
Flow over cavities is an important area of research due to the significant change in flow physics caused by cavity aspect ratio, free stream Mach number and the nature of upstream boundary layer approaching the cavity leading edge. Cavity flow finds application in aircraft wheel well, weapons bay, combustion chamber of scramjet engines, etc. These flows are highly unsteady, compressible and turbulent and it involves mass entrainment coupled with acoustics phenomenon. Variation of flow dynamics in an angled cavity with a heated and cooled pylon upstream to the cavity with spatial combinations of heat flux addition and removal to the wall studied numerically. The goal of study is to investigate the effect of energy addition, removal to the cavity walls and pylon cavity flow dynamics. Preliminary steady state numerical simulations on inclined cavities with heat addition have shown that wall pressure profiles, as well as the recirculation, are influenced by heat transfer to the compressible fluid medium. Such a hybrid control of cavity flow dynamics in the form of heat transfer and pylon geometry can open out greater opportunities in enhancement of mixing and flame holding requirements of supersonic combustors. Addition of pylon upstream to the cavity reduces the acoustic oscillations emanating from the geometry. A numerical unsteady analysis of supersonic flow past cavities exposed to cavity wall heating and cooling with heated and cooled pylon helps to get a clear idea about the oscillation suppression in the cavity. A Cavity of L/D 4 and aft wall angle 22 degree with an upstream pylon of h/D=1.5 mm with a wall angle 29 degree exposed to supersonic flow of Mach number 2 and heat flux of 40 W/cm² and -40 W/cm² modeled for the above study. In the preliminary study, the domain is modeled and validated numerically with a turbulence model of SST k-ω using an HLLC implicit scheme. Both qualitative and quantitative flow data extracted and analyzed using advanced CFD tools. Flow visualization is done using numerical Schlieren method as the fluid medium gives the density variation. The heat flux addition to the wall increases the secondary vortex size of the cavity and removal of energy leads to the reduction in vortex size. The flow field turbulence seems to be increasing at higher heat flux. The shear layer thickness increases as heat flux increases. The steady state analysis of wall pressure shows that there is variation on wall pressure as heat flux increases. Shift in frequency of unsteady wall pressure analysis is an interesting observation for the above study. The time averaged skin friction seems to be reducing at higher heat flux due to the variation in viscosity of fluid inside the cavity.Keywords: energy addition, frequency shift, Numerical Schlieren, shear layer, vortex evolution
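Numerical Schlieren visualization, mentioned in the abstract, amounts to mapping the magnitude of the density gradient to an exponential grey scale; a small post-processing sketch of that step is shown below with a toy density field. The constant beta and the field itself are assumptions, not the study's CFD output.

```python
import numpy as np

def numerical_schlieren(density, beta=0.8):
    """Numerical Schlieren image from a 2-D density field:
    exp(-beta * |grad rho| / max|grad rho|), so strong gradients appear dark."""
    d_dy, d_dx = np.gradient(density)
    grad_mag = np.hypot(d_dx, d_dy)
    return np.exp(-beta * grad_mag / grad_mag.max())

# Toy density field with an oblique discontinuity standing in for a shock
y, x = np.mgrid[0:200, 0:400]
rho = 1.0 + 0.8 * (y > 0.5 * x)
print(numerical_schlieren(rho).shape)
```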
Procedia PDF Downloads 143
95 Integrative Omics-Portrayal Disentangles Molecular Heterogeneity and Progression Mechanisms of Cancer
Authors: Binder Hans
Abstract:
Cancer is no longer seen as solely a genetic disease where genetic defects such as mutations and copy number variations affect gene regulation and eventually lead to aberrant cell functioning which can be monitored by transcriptome analysis. It has become obvious that epigenetic alterations represent a further important layer of (de-)regulation of gene activity. For example, aberrant DNA methylation is a hallmark of many cancer types, and methylation patterns were successfully used to subtype cancer heterogeneity. Hence, unraveling the interplay between different omics levels such as genome, transcriptome and epigenome is inevitable for a mechanistic understanding of molecular deregulation causing complex diseases such as cancer. This objective requires powerful downstream integrative bioinformatics methods as an essential prerequisite to discover the whole genome mutational, transcriptome and epigenome landscapes of cancer specimen and to discover cancer genesis, progression and heterogeneity. Basic challenges and tasks arise ‘beyond sequencing’ because of the big size of the data, their complexity, the need to search for hidden structures in the data, for knowledge mining to discover biological function and also systems biology conceptual models to deduce developmental interrelations between different cancer states. These tasks are tightly related to cancer biology as an (epi-)genetic disease giving rise to aberrant genomic regulation under micro-environmental control and clonal evolution which leads to heterogeneous cellular states. Machine learning algorithms such as self organizing maps (SOM) represent one interesting option to tackle these bioinformatics tasks. The SOMmethod enables recognizing complex patterns in large-scale data generated by highthroughput omics technologies. It portrays molecular phenotypes by generating individualized, easy to interpret images of the data landscape in combination with comprehensive analysis options. Our image-based, reductionist machine learning methods provide one interesting perspective how to deal with massive data in the discovery of complex diseases, gliomas, melanomas and colon cancer on molecular level. As an important new challenge, we address the combined portrayal of different omics data such as genome-wide genomic, transcriptomic and methylomic ones. The integrative-omics portrayal approach is based on the joint training of the data and it provides separate personalized data portraits for each patient and data type which can be analyzed by visual inspection as one option. The new method enables an integrative genome-wide view on the omics data types and the underlying regulatory modes. It is applied to high and low-grade gliomas and to melanomas where it disentangles transversal and longitudinal molecular heterogeneity in terms of distinct molecular subtypes and progression paths with prognostic impact.Keywords: integrative bioinformatics, machine learning, molecular mechanisms of cancer, gliomas and melanomas
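As a compact illustration of the self-organizing map (SOM) portrayal idea, the sketch below trains a minimal rectangular SOM on row-wise omics-like feature vectors; the grid size, learning-rate schedule and random data are assumptions, and this is not the authors' portrayal software.

```python
import numpy as np

def train_som(data, grid=(20, 20), epochs=10, lr0=0.5, sigma0=5.0, seed=0):
    """Minimal rectangular self-organizing map trained on row-wise samples
    (e.g. per-patient expression or methylation feature vectors)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h, w, data.shape[1]))
    yy, xx = np.mgrid[0:h, 0:w]
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            step += 1
            frac = step / n_steps
            lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 1e-3
            # Best-matching unit for this sample
            d = np.linalg.norm(weights - x, axis=2)
            bi, bj = np.unravel_index(d.argmin(), d.shape)
            # Gaussian neighbourhood update pulls nearby units toward the sample
            nb = np.exp(-((yy - bi) ** 2 + (xx - bj) ** 2) / (2 * sigma ** 2))
            weights += lr * nb[..., None] * (x - weights)
    return weights

data = np.random.default_rng(1).normal(size=(100, 50))  # toy cohort: 100 samples, 50 features
print(train_som(data).shape)
```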
Procedia PDF Downloads 148
94 Methodology to Achieve Non-Cooperative Target Identification Using High Resolution Range Profiles
Authors: Olga Hernán-Vega, Patricia López-Rodríguez, David Escot-Bocanegra, Raúl Fernández-Recio, Ignacio Bravo
Abstract:
Non-Cooperative Target Identification has become a key research domain in the Defense industry since it provides the ability to recognize targets at long distance and under any weather condition. High Resolution Range Profiles, one-dimensional radar images where the reflectivity of a target is projected onto the radar line of sight, are widely used for identification of flying targets. According to that, to face this problem, an approach to Non-Cooperative Target Identification based on the exploitation of Singular Value Decomposition to a matrix of range profiles is presented. Target Identification based on one-dimensional radar images compares a collection of profiles of a given target, namely test set, with the profiles included in a pre-loaded database, namely training set. The classification is improved by using Singular Value Decomposition since it allows to model each aircraft as a subspace and to accomplish recognition in a transformed domain where the main features are easier to extract hence, reducing unwanted information such as noise. Singular Value Decomposition permits to define a signal subspace which contain the highest percentage of the energy, and a noise subspace which will be discarded. This way, only the valuable information of each target is used in the recognition process. The identification algorithm is based on finding the target that minimizes the angle between subspaces and takes place in a transformed domain. Two metrics, F1 and F2, based on Singular Value Decomposition are accomplished in the identification process. In the case of F2, the angle is weighted, since the top vectors set the importance in the contribution to the formation of a target signal, on the contrary F1 simply shows the evolution of the unweighted angle. In order to have a wide database or radar signatures and evaluate the performance, range profiles are obtained through numerical simulation of seven civil aircraft at defined trajectories taken from an actual measurement. Taking into account the nature of the datasets, the main drawback of using simulated profiles instead of actual measured profiles is that the former implies an ideal identification scenario, since measured profiles suffer from noise, clutter and other unwanted information and simulated profiles don't. In this case, the test and training samples have similar nature and usually a similar high signal-to-noise ratio, so as to assess the feasibility of the approach, the addition of noise has been considered before the creation of the test set. The identification results applying the unweighted and weighted metrics are analysed for demonstrating which algorithm provides the best robustness against noise in an actual possible scenario. So as to confirm the validity of the methodology, identification experiments of profiles coming from electromagnetic simulations are conducted, revealing promising results. Considering the dissimilarities between the test and training sets when noise is added, the recognition performance has been improved when weighting is applied. Future experiments with larger sets are expected to be conducted with the aim of finally using actual profiles as test sets in a real hostile situation.Keywords: HRRP, NCTI, simulated/synthetic database, SVD
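The identification scheme described (SVD-derived signal subspaces compared by an angle between subspaces) can be sketched as follows; the rank, the unweighted angle metric and the synthetic profile matrices are assumptions meant only to illustrate the approach, not to reproduce the F1/F2 metrics.

```python
import numpy as np

def target_subspace(profiles, rank=10):
    """Signal subspace of a target from a matrix of range profiles
    (one profile per column), obtained by truncating the SVD."""
    U, s, _ = np.linalg.svd(profiles, full_matrices=False)
    return U[:, :rank]

def subspace_angle(U_a, U_b):
    """Smallest principal angle (radians) between two orthonormal subspaces:
    cosines of the principal angles are the singular values of U_a^T U_b."""
    sv = np.linalg.svd(U_a.T @ U_b, compute_uv=False)
    return np.arccos(np.clip(sv.max(), -1.0, 1.0))

def identify(test_profiles, library, rank=10):
    """Return the library target whose signal subspace is closest to the test subspace."""
    U_test = target_subspace(test_profiles, rank)
    return min(library, key=lambda name: subspace_angle(U_test, library[name]))

rng = np.random.default_rng(3)
# Hypothetical library: 3 aircraft, 200 training profiles of 128 range bins each
library = {f"aircraft_{i}": target_subspace(rng.normal(size=(128, 200))) for i in range(3)}
print(identify(rng.normal(size=(128, 50)), library))
```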
Procedia PDF Downloads 354
93 Evaluation of Sustained Improvement in Trauma Education Approaches for the College of Emergency Nursing Australasia Trauma Nursing Program
Authors: Pauline Calleja, Brooke Alexander
Abstract:
In 2010, the College of Emergency Nursing Australasia (CENA) undertook sole administration of the Trauma Nursing Program (TNP) across Australia. The original TNP was developed from recommendations by the Review of Trauma and Emergency Services-Victoria. While participant and faculty feedback about the program was positive, issues were identified that are common to industry training programs in Australia. These issues included didactic approaches, with many lectures and little interaction or activity for participants. Participants were not necessarily encouraged to undertake deep learning due to the teaching and learning principles underpinning the course, and thus described having to learn by rote, gaining only a surface understanding of principles that were not always applied to their working context. In Australia, a trauma or emergency nurse may work in variable contexts that impact on practice, especially where resources influence the scope and capacity of hospitals to provide trauma care. In 2011, a program review was undertaken, resulting in major changes to the curriculum, teaching, learning and assessment approaches. The aim was to improve learning, with a greater emphasis on pre-program preparation for participants, the learning environment, and clinically applicable, contextualized outcomes for participants. Previously, if participants wished to undertake assessment, they were given a take-home examination. The assessment had poor uptake and return, and provided little rigor since it was not invigilated. A new assessment structure was enacted, with an invigilated examination during course hours. These changes were implemented in early 2012 with great improvement in both faculty and participant satisfaction. This presentation reports on a comparison of participant evaluations collected from courses post implementation, in 2012 and in 2015, to evaluate whether the positive changes were sustained. Methods: Descriptive statistics were applied in analyzing the evaluations. Since all questions had more than 20% of cells with a count of <5, Fisher's Exact Test was used to identify significance (p < 0.05) between groups. Results: A total of fourteen group evaluations were included in this analysis, seven CENA TNP groups from 2012 and seven from 2015 (randomly chosen). A total of 173 participant evaluations were collated (n = 81 from 2012 and n = 92 from 2015). All course evaluations were anonymous, and nine of the original 14 questions were applicable for this evaluation. All questions were rated by participants on a five-point Likert scale. While all items showed improvement from 2012 to 2015, significant improvement was noted in two items: whether the content was delivered in a way that met participant learning needs, and satisfaction with the length and pace of the program. Evaluation of written comments supports these results. Discussion: The aim of redeveloping the CENA TNP was to improve learning and satisfaction for participants. These results demonstrate that the initial improvements in 2012 were maintained and, in two essential areas, significantly improved upon. Changes that increased participant engagement, support and contextualization of course materials were essential for CENA TNP evolution. Keywords: emergency nursing education, industry training programs, teaching and learning, trauma education
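The statistical comparison described above relies on Fisher's exact test for tables with small expected counts; a minimal sketch on a hypothetical collapse of the Likert responses (the counts are invented, not the study's data) could look as follows.

```python
# Hedged sketch: Fisher's exact test on a hypothetical 2x2 table built by collapsing
# Likert ratings into "agree (4-5)" vs "other (1-3)" for the 2012 and 2015 cohorts.
# The counts below are invented for illustration; they are not the study's data.
from scipy.stats import fisher_exact

#                 agree  other
table = [[60, 21],        # 2012 cohort (n = 81)
         [82, 10]]        # 2015 cohort (n = 92)

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("difference between cohorts is significant at p < 0.05")
```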
Procedia PDF Downloads 270
92 Logic of Appearance vs Explanatory Logic: A Systemic Functional Linguistics Approach to the Evolution of Communicative Strategies in the European Union Institutional Discourse
Authors: Antonio Piga
Abstract:
The issue of European cultural identity has become a prominent topic of discussion among political actors in the wake of the unsuccessful referenda held in France and the Netherlands in May and June 2006. The ‘period of reflection’ announced by the European Council at the conclusion of June 2006 provided an opportunity for the implementation of several initiatives and programmes designed to ‘bridge the gap’ between the EU institutions and its citizens. Specific programmes were designed with the objective of enhancing the European Commission's external communication of its activities. Subsequently, further plans for democracy, debate, and dialogue were devised with the objective of fostering open and extensive discourse between EU institutions and citizens. Further documentation on communication policy emphasised the necessity of developing linguistic techniques to re-engage disenchanted or uninformed citizens with the European project. It was observed that the European Union is perceived as a ‘faceless’ entity, which is attributed to the absence of a distinct public identity vis-à-vis its institutions. This contribution presents an analysis of a collection of informative publications about the European Union, entitled “Europe on the Move”. This collection of booklets provides comprehensive information about the European Union, including its historical origins, core values, and historical development, as well as its achievements, strategic objectives, policies, and operational procedures. The theoretical framework adopted for the longitudinal linguistic analysis of EU discourse is that of Systemic Functional Linguistics (SFL). In more detail, this study considers two basic systems of relations between clauses: firstly, the degree of interdependency (or taxis) and, secondly, the logico-semantic relation of expansion. The former refers to the structural markers of grammatical relations between clauses within sentences, namely paratactic, hypotactic and embedded relations. The latter pertains to the various logico-semantic relationships existing between the primary and secondary members of the clause nexus. These relationships concern how the secondary clause expands the primary clause, which may be achieved by (a) elaborating it, (b) extending it or (c) enhancing it. This study examines the impact of the European Commission's post-referendum communication methods on the portrayal of Europe, its role in facilitating the EU institutional process, and its articulation of a specific EU identity linked to distinct values. The research reveals that the language employed by the EU is evidently grounded in an explanatory logic, elucidating the rationale behind its institutionalised acts. Nevertheless, the minimal use of hypotaxis in the post-referendum booklets, coupled with the inconsistent yet increasing ratio of parataxis to hypotaxis, may suggest a potential shift towards a logic of appearance, characterised by a predominant reliance on coordination and on additive and elaborative logico-semantic relations. Keywords: systemic functional linguistics, logic of appearance, explanatory logic, interdependency, logico-semantic relation
Procedia PDF Downloads 6
91 Inclusion Advances of Disabled People in Higher Education: Possible Alignment with the Brazilian Statute of the Person with Disabilities
Authors: Maria Cristina Tommaso, Maria Das Graças L. Silva, Carlos Jose Pacheco
Abstract:
Have the advances in Brazilian legislation been reflected in, or been consonant with, the inclusion of people with disabilities (PwD) in higher education? In 1990, the World Declaration on Education for All, a document organized by the United Nations Educational, Scientific and Cultural Organization (UNESCO), stated that the basic learning needs of people with disabilities, as they were called, required special attention. Since then, legislation in signatory countries such as Brazil has made considerable progress in guaranteeing, in a gradual and increasing manner, the rights of persons with disabilities to education. Principles, policies, and practices for special educational needs were created and guided action at the regional, national and international levels on the structure of action in Special Education, such as administration, recruitment of educators and community involvement. Brazilian Education Law No. 3.284 of 2003 ensures the inclusion of people with disabilities in Brazilian higher education institutions, and Law No. 13,146/2015, the Brazilian Law on the Inclusion of Persons with Disabilities (Statute of the Person with Disabilities), regulates the inclusion of PwD by guaranteeing their rights. This study analyses data on the inclusion of people with disabilities in higher education in the southern region of Rio de Janeiro State, Brazil, between 2008 and 2018, correlating them with the changes in Brazilian legislation over the last ten years that shaped PwD inclusion processes in the Brazilian higher education system. The region studied comprises sixteen cities; this research refers to the largest one, Volta Redonda, which represents 25 percent of the total regional population. Data on the PwD reception process were collected at the Volta Redonda University Center, which accounts for 35 percent of higher education students in this territorial area. The research methodology analyzed the changes occurring in the legislation on the inclusion of people with disabilities in higher education over the last ten years and their impacts on the samples of this study during the period between 2008 and 2018. A substantial increase in the number of PwD students was verified, from two in 2008 to 190 in 2018. The conclusions are presented in quantitative terms, and the aim of this study was to verify the effectiveness of PwD inclusion in higher education, giving visibility to this social group. This study verified that guarantees of fundamental human rights are strongly related to advances in legislation, and that the State, as guarantor of the rights of people with disabilities, must be considered a means of consolidating isonomy in their educational opportunities. The recognition of full rights and the inclusion of people with disabilities require the efforts of those who have decision-making power. This study aimed to demonstrate that legislative evolution is an effective instrument in the social integration of people with disabilities. The study confirms the fundamental role of the State in guaranteeing human rights and demonstrates that legislation not only protects the interests of vulnerable social groups but can also, and this is perhaps its main mission, change behavior patterns and provoke the social transformation necessary to reduce inequality of opportunity. Keywords: higher education, inclusion, legislation, people with disability
Procedia PDF Downloads 152
90 The New Contemporary Cross-Cultural Buddhist Woman and Her Attitude and Perception toward Motherhood
Authors: Szerena Vajkovszki
Abstract:
Among the relatively large volume of literature, the role and perception of women in Buddhism have been examined from various perspectives such as theology, history, anthropology, and feminism. When Buddhism spread to the West, women had a major role in its adaptation and development. The meeting of different cultures and social structures bore the fruit of a necessity to change. As Buddhism gained attention in the West, it produced a Buddhist feminist identity across national and ethnic boundaries; globalization thus produced a contemporary cross-cultural Buddhist woman. The aim of the research is to find out the new role of such a Buddhist woman in aging societies, and more precisely to understand what effect this contemporary Buddhist religion may have, direct or indirect, on fertility. Our worldwide aging society, especially in developed countries, including members of the EU, raises sophisticated sociological and economic issues and challenges to be met. As declining fertility has an outstanding influence underlying this trend, numerous studies have attempted to identify, describe, measure and interpret contributing factors of the fertility rate, out of which relatively few revealed the impact of religion. Among many religious guidelines, we can separate two major categories: direct and indirect. The aim of this research was to understand which are the most crucial contributing contemporary Buddhist religious factors, both those already identified (family values, gender-related behaviors, religious sentiments) and those not yet identified. Beyond identifying these direct or indirect factors, it is also important to understand to what extent and how they influence fertility, which requires a wider, inter-disciplinary perspective. As previous studies have shown, religion also has an influential role in health, mental state, well-being, working activity and many other components that are related to fertility rates. All these components are inter-related; hence direct and indirect religious effects can only be well understood if we map all the necessary fields and their interactions. With the help of semi-structured open interviews taking place in different countries, it was shown that Buddhism indeed has significant direct and indirect effects on fertility; hence the initial hypothesis was supported. Although the interviews showed an overall positive effect, the results can only serve as a general understanding of how Buddhism affects fertility. The direct and indirect influence of Buddhism may vary in different nations and circumstances according to their specific environmental attributes. Depending on local patterns, with special regard to women's position and role in society, indirect influences in particular could show diversification. It is therefore advisable to investigate further for a deeper and clearer understanding of how Buddhism functions in different socioeconomic circumstances. For example, in Hungary, after the period of secularization, more and more people tended to be attracted toward transcendent values, which could be an explanation for the rising number of Buddhists in the country. The present research could serve as a general starting point or a common basis for further specific national investigations of how contemporary Buddhism affects fertility. Keywords: contemporary Buddhism, cross-cultural woman, fertility, gender roles, religion
Procedia PDF Downloads 153
89 Combining a Continuum of Hidden Regimes and a Heteroskedastic Three-Factor Model in Option Pricing
Authors: Rachid Belhachemi, Pierre Rostan, Alexandra Rostan
Abstract:
This paper develops a discrete-time option pricing model for index options. The model consists of two key ingredients. First, daily stock return innovations are driven by a continuous hidden threshold mixed skew-normal (HTSN) distribution, which generates the conditional non-normality needed to fit daily index returns. The most important feature of the HTSN is the inclusion of a latent state variable with a continuum of states, unlike traditional mixture distributions where the state variable is discrete with a small number of states. The HTSN distribution belongs to the class of univariate probability distributions whose parameters capture the dependence between the variable of interest and the continuous latent state variable (the regime). The distribution has an interpretation in terms of a mixture distribution with time-varying mixing probabilities. It has been shown empirically that this distribution outperforms its main competitor, the mixed normal (MN) distribution, in capturing the stylized facts known for stock returns, namely volatility clustering, leverage effect, skewness, kurtosis and regime dependence. Second, heteroscedasticity in the model is captured by a three-exogenous-factor GARCH model (GARCHX), where the factors are taken from a principal component analysis of various world indices; an application to option pricing is presented. The factors of the GARCHX model are extracted from a matrix of world indices by applying principal component analysis (PCA). The empirically determined factors are uncorrelated and represent truly different common components driving the returns. Both the factors and the eight parameters inherent to the HTSN distribution aim at capturing the impact of the state of the economy on price levels, since the distribution parameters have economic interpretations in terms of conditional volatilities and correlations of the returns with the hidden continuous state. The PCA identifies statistically independent factors affecting the random evolution of a given pool of assets (in our paper, a pool of international stock indices) and sorts them in order of relative importance. The PCA computes a historical cross-asset covariance matrix and identifies principal components representing independent factors. In our paper, the factors are used to calibrate the HTSN-GARCHX model and are ultimately responsible for the nature of the distribution of the random variables being generated. We benchmark our model against the MN-GARCHX model, following the same PCA methodology, and against the standard Black-Scholes model. We show that our model outperforms the benchmark in terms of RMSE in dollar losses for put and call options, which in turn outperforms the analytical Black-Scholes model by capturing the stylized facts known for index returns, namely volatility clustering, leverage effect, skewness, kurtosis and regime dependence. Keywords: continuous hidden threshold, factor models, GARCHX models, option pricing, risk-premium
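The factor-extraction step can be sketched as follows: PCA factors are computed from a panel of index returns and fed into a GARCH(1,1)-type variance recursion with exogenous terms. The random returns, the specific recursion and the parameter values below are illustrative placeholders, not the calibrated HTSN-GARCHX model of the paper.

```python
# Hedged sketch: PCA factors from a panel of index returns feeding a
# GARCH(1,1)-with-exogenous-factors ("GARCH-X") variance recursion.
# Returns and parameter values are placeholders for illustration only.
import numpy as np

rng = np.random.default_rng(42)
T, n_indices = 1000, 12
returns = 0.01 * rng.standard_normal((T, n_indices))     # stand-in for world index returns

# PCA on the demeaned return panel: components sorted by explained variance.
X = returns - returns.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
factors = X @ Vt[:3].T                                    # first three uncorrelated factors

# GARCH-X recursion: h_t = w + a*eps_{t-1}^2 + b*h_{t-1} + g . |f_{t-1}|
target = 0.01 * rng.standard_normal(T)                    # stand-in for the priced index
w, a, b = 1e-6, 0.08, 0.90
g = np.array([2e-6, 1e-6, 5e-7])                          # illustrative factor loadings

h = np.empty(T)
h[0] = target.var()
for t in range(1, T):
    h[t] = w + a * target[t - 1] ** 2 + b * h[t - 1] + g @ np.abs(factors[t - 1])

print("annualized vol estimate on last day:", np.sqrt(252 * h[-1]))
```

In the paper, the conditional variances produced by this kind of recursion, together with the HTSN innovation distribution, drive the simulated index paths used to price the options.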
Procedia PDF Downloads 297
88 Addressing Sustainable Development Goals in Palestine: Conflict, Sustainability, and Human Rights
Authors: Nowfiya Humayoon
Abstract:
The Sustainable Development Goals (SDGs) were launched by the UN in 2015 as a global initiative aimed at eradicating poverty, safeguarding the environment, and promoting peace and prosperity, with a target year of 2030. The SDGs are vital for achieving global peace, prosperity, and sustainability. As for all nations of the world, these goals are crucial for Palestine, but achieving them is challenging due to the ongoing crisis. Effective action toward achieving each Sustainable Development Goal in Palestine has been severely challenged from the beginning by political instability, limited access to resources, international aid constraints, economic blockade, and other factors. In the context of the ongoing conflict, there are severe violations of international humanitarian law, including targeting civilians, using excessive force, and blocking humanitarian aid, which have led to significant civilian casualties, suffering, and deaths. Therefore, addressing the Sustainable Development Goals is imperative for ensuring human rights, combating violations and fostering sustainability. Methodology: The study adopts a historical, analytical and quantitative approach to evaluate the impact of the ongoing conflict on the SDGs in Palestine, with a focus on sustainability and human rights. It examines historical documents, reports of international and regional organizations, recent journal and newspaper articles, and other relevant literature to trace the evolution and the on-ground realities of the conflict and its effects. Quantitative data are collected by analyzing statistical reports from government agencies, non-governmental organizations (NGOs) and international bodies. Databases from the World Bank, the United Nations and the World Health Organization are utilized. Various health and economic indicators on mortality rates, infant mortality rates and income levels are also gathered. Major Findings: The study reveals profound challenges in achieving the Sustainable Development Goals (SDGs) in Palestine, which include economic blockades and restricted access to resources that have left a substantial portion of the population living below the poverty line, overburdened healthcare facilities struggling to cope with demand, shortages of medical supplies, disrupted educational systems, with many schools destroyed or repurposed and children facing significant barriers to accessing quality education, damaged infrastructure, restricted access to clean water and sanitation services, and limited access to reliable energy sources. Conclusion: The ongoing crisis in Palestine has drastically affected progress towards the Sustainable Development Goals (SDGs), causing innumerable crises. Violations of international humanitarian law have caused substantial suffering and loss of life. Immediate and coordinated global action and efforts are crucial in addressing these challenges in order to uphold humanitarian values and promote sustainable development in the region. Keywords: genocide, human rights, occupation, sustainable development goals
Procedia PDF Downloads 14
87 Polarization as a Proxy of Misinformation Spreading
Authors: Michela Del Vicario, Walter Quattrociocchi, Antonio Scala, Ana Lucía Schmidt, Fabiana Zollo
Abstract:
Information, rumors, and debates may shape and heavily impact public opinion. In recent years, several concerns have been expressed about social influence on the Internet and the outcome that online debates might have on real-world processes. Indeed, on online social networks users tend to select information that is coherent with their system of beliefs and to form groups of like-minded people (i.e., echo chambers) where they reinforce and polarize their opinions. In this way, the potential benefits coming from exposure to different points of view may be reduced dramatically, and individuals' views may become more and more extreme. Such a context fosters misinformation spreading, which has always represented a socio-political and economic risk. The persistence of unsubstantiated rumors (e.g., the hypothetical and hazardous link between vaccines and autism) suggests that social media do have the power to misinform, manipulate, or control public opinion. As an example, current approaches such as debunking efforts or algorithm-driven solutions based on the reputation of the source seem to prove ineffective against collective superstition. Indeed, experimental evidence shows that confirmatory information gets accepted even when it contains deliberately false claims, while dissenting information is mainly ignored, influences users' emotions negatively and may even increase group polarization. Moreover, confirmation bias has been shown to play a pivotal role in information cascades, posing serious warnings about the efficacy of current debunking efforts. Nevertheless, mitigation strategies have to be adopted. To generalize the problem and to better understand the social dynamics behind information spreading, in this work we rely on a tight quantitative analysis to investigate the behavior of more than 300M users with respect to news consumption on Facebook over a time span of six years (2010-2015). Through a massive analysis of 920 news outlet pages, we are able to characterize the anatomy of news consumption on a global and international scale. We show that users tend to focus on a limited set of pages (selective exposure), eliciting a sharp and polarized community structure among news outlets. Moreover, we find similar patterns around the Brexit debate (the British referendum to leave the European Union), where we observe the spontaneous emergence of two well-segregated and polarized groups of users around news outlets. Our findings provide interesting insights into the determinants of polarization and the evolution of core narratives in online debating. Our main aim is to understand and map the information space on online social media by identifying non-trivial proxies for the early detection of massive informational cascades. Furthermore, by combining users' traces, we are finally able to draft the main concepts and beliefs of the core narrative of an echo chamber and its related perceptions. Keywords: information spreading, misinformation, narratives, online social networks, polarization
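One simple way to operationalize the polarization of individual users from interaction traces, in the spirit of the analysis above, is sketched below; the two content categories and the like counts are invented for illustration and are not the study's Facebook data.

```python
# Hedged sketch: a simple user polarization score from interaction traces, defined here
# as rho = 2 * (likes_on_category_A / total_likes) - 1, so rho near -1 or +1 means a user
# interacts almost exclusively with one of the two content categories.
# The user traces below are invented; the study works with Facebook page-level data.
from collections import Counter

user_likes = {
    "user_1": ["science", "science", "science", "conspiracy"],
    "user_2": ["conspiracy"] * 9 + ["science"],
    "user_3": ["science"] * 5 + ["conspiracy"] * 5,
}

def polarization(likes, category="science"):
    counts = Counter(likes)
    total = sum(counts.values())
    return 2 * counts[category] / total - 1 if total else 0.0

scores = {user: polarization(likes) for user, likes in user_likes.items()}
print(scores)                       # values near +/-1 indicate polarized users
polarized = [u for u, r in scores.items() if abs(r) > 0.9]
print("strongly polarized:", polarized)
```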
Procedia PDF Downloads 288
86 Non-Invasive Characterization of the Mechanical Properties of Arterial Walls
Authors: Bruno Ramaël, Gwenaël Page, Catherine Knopf-Lenoir, Olivier Baledent, Anne-Virginie Salsac
Abstract:
No routine technique currently exists for clinicians to measure the mechanical properties of vascular walls non-invasively. Most of the data available in the literature come from traction or dilatation tests conducted ex vivo on native blood vessels. The objective of the study is to develop a non-invasive characterization technique based on Magnetic Resonance Imaging (MRI) measurements of the deformation of vascular walls under pulsating blood flow conditions. The goal is to determine the mechanical properties of the vessels by inverse analysis, coupling imaging measurements and numerical simulations of the fluid-structure interactions. The hyperelastic properties are identified using SolidWorks and ANSYS Workbench (ANSYS Inc.) through an optimization procedure. The vessel of interest targeted in the study is the common carotid artery. In vivo MRI measurements of the vessel anatomy and inlet velocity profiles were acquired along the facial vascular network on a cohort of 30 healthy volunteers: (i) the time-evolution of the blood vessel contours and, thus, of the cross-section surface area was measured by 3D imaging angiography sequences of phase-contrast MRI; (ii) the blood flow velocity was measured using a 2D CINE MRI phase contrast (PC-MRI) method. Reference arterial pressure waveforms were simultaneously measured in the brachial artery using a sphygmomanometer. The three-dimensional (3D) geometry of the arterial network was reconstructed by first creating an STL file from the raw MRI data using the open source imaging software ITK-SNAP. The resulting geometry was then transformed with SolidWorks into volumes that are compatible with ANSYS software. Tetrahedral meshes of the wall and fluid domains were built using the ANSYS Meshing software, with a near-wall mesh refinement method in the case of the fluid domain to improve the accuracy of the fluid flow calculations. ANSYS Structural was used for the numerical simulation of the vessel deformation and ANSYS CFX for the simulation of the blood flow. The fluid-structure interaction simulations showed that the systolic and diastolic blood pressures of the common carotid artery could be taken as reference pressures to identify the mechanical properties of the different arteries of the network. The coefficients of the hyperelastic law were identified using the ANSYS Design model for the common carotid. Under large deformations, a stiffness of 800 kPa is measured, which is of the same order of magnitude as the Young's modulus of collagen fibers. Areas of maximum deformation were highlighted near bifurcations. This study is a first step towards patient-specific characterization of the mechanical properties of the facial vessels. The method is currently being applied to patients suffering from facial vascular malformations and to patients scheduled for facial reconstruction. Information on the blood flow velocity as well as on the vessel anatomy and deformability will be key to improving surgical planning in the case of such vascular pathologies. Keywords: identification, mechanical properties, arterial walls, MRI measurements, numerical simulations
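The inverse-analysis idea can be sketched with a toy model: a wall stiffness is sought that minimizes the mismatch between measured and model-predicted lumen areas over the pressure cycle. A linearized thin-walled tube stands in here for the full ANSYS fluid-structure simulation, and all numerical values are illustrative assumptions.

```python
# Hedged sketch of the inverse analysis: find a wall stiffness that minimizes the
# mismatch between "measured" lumen areas (synthetic here) and areas predicted by a
# linear-elastic thin-walled tube model. The real study couples MRI data with ANSYS
# fluid-structure simulations and a hyperelastic law; this toy model, the pressure
# range and the target stiffness are illustrative assumptions only.
import numpy as np
from scipy.optimize import minimize_scalar

r0, h = 3.0e-3, 0.5e-3                       # reference radius and wall thickness [m]
pressures = np.linspace(8.0e3, 16.0e3, 20)   # diastolic-to-systolic pressures [Pa]

def model_area(E, p):
    """Lumen area of a pressurized thin-walled elastic tube (linearized Laplace law)."""
    r = r0 * (1.0 + p * r0 / (E * h))        # hoop strain = p * r0 / (E * h)
    return np.pi * r ** 2

# Synthetic "MRI measurement": areas generated with a known stiffness plus noise.
E_true = 800e3                               # 800 kPa, same order as reported in the abstract
rng = np.random.default_rng(0)
measured = model_area(E_true, pressures) * (1 + 0.01 * rng.standard_normal(pressures.size))

def cost(E):
    return np.sum((model_area(E, pressures) - measured) ** 2)

res = minimize_scalar(cost, bounds=(50e3, 5e6), method="bounded")
print(f"identified stiffness: {res.x / 1e3:.0f} kPa")
```

In the actual workflow, each cost evaluation would launch a fluid-structure simulation instead of the closed-form area model, which is why the optimization loop is delegated to the ANSYS design tools.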
Procedia PDF Downloads 319
85 Studies on the Bioactivity of Different Solvents Extracts of Selected Marine Macroalgae against Fish Pathogens
Authors: Mary Ghobrial, Sahar Wefky
Abstract:
Marine macroalgae have proven to be a rich source of bioactive compounds with biomedical potential, not only for human but also for veterinary medicine. The emergence of microbial disease in aquaculture industries causes serious losses. Usage of commercial antibiotics for fish disease treatment produces undesirable side effects. Marine organisms are a rich source of structurally novel biologically active metabolites. Competition for space and nutrients led to the evolution of antimicrobial defense strategies in the aquatic environment. The interest in marine organisms as a potential and promising source of pharmaceutical agents has increased in recent years. Many bioactive and pharmacologically active substances have been isolated from microalgae. Compounds with antibacterial, antifungal and antiviral activities have also been detected in green, brown and red algae. Selected species of marine benthic algae belonging to the Phaeophyta and Rhodophyta, collected from different coastal areas of Alexandria (Egypt), were investigated for their antibacterial and antifungal activities. Macroalgae samples were collected during low tide from the Alexandria Mediterranean coast. Samples were air dried under shade at room temperature. The dry algae were ground using an electric grinder and soaked in 10 ml of each of the solvents acetone, ethanol, methanol and hexane. Antimicrobial activity was evaluated using the well-cut diffusion technique. In vitro screening of organic solvent extracts from the marine macroalgae Laurencia pinnatifida, Pterocladia capillacea, Stypopodium zonale, Halopteris scoparia and Sargassum hystrix showed specific activity in inhibiting the growth of five virulent strains of bacteria pathogenic to fish, Pseudomonas fluorescens, Aeromonas hydrophila, Vibrio anguillarum, V. tandara and Escherichia coli, and two fungi, Aspergillus flavus and A. niger. Results showed that acetone and ethanol extracts of all tested macroalgae exhibited antibacterial activity, while the acetone extract of the brown Sargassum hystrix displayed the highest antifungal activity. The seaweed extracts inhibited bacteria more strongly than fungi, and species of the Rhodophyta showed greater activity against the bacteria than against the fungi tested. Gas-liquid chromatography coupled with mass spectrometry detection allows good qualitative and quantitative analysis of the fractionated extracts, with high sensitivity to the smaller amounts of components. Results indicated that the main common component in the acetone extracts of L. pinnatifida and P. capillacea is 4-hydroxy-4-methyl-2-pentanone, representing 64.38 and 58.60%, respectively. Thus, the extracts derived from the red macroalgae were more efficient than those obtained from the brown macroalgae in combating bacterial pathogens rather than pathogenic fungi. The most preferred species overall was the red Laurencia pinnatifida. In conclusion, the present study demonstrates the potential of red and brown macroalgae extracts for the development of anti-pathogenic agents for use in fish aquaculture. Keywords: bacteria, fungi, extracts, solvents
Procedia PDF Downloads 437
84 Primary and Secondary Big Bangs Theory of Creation of Universe
Authors: Shyam Sunder Gupta
Abstract:
The current theory for the creation of the universe, the Big Bang theory, is widely accepted but leaves some questions unanswered. It does not explain the origin of the singularity or what causes the Big Bang. The theory also does not explain why there is such a huge amount of dark energy and dark matter in our universe. In addition, the question of whether there is one universe or multiple universes needs to be answered. This research addresses these questions using the Bhagvat Puran and other Vedic scriptures as its basis. There is a Unique Pure Energy Field that is eternal, infinite, and the finest of all, and it never transforms when in its original form. The carrier particles of the Unique Pure Energy are Param-anus, the Fundamental Energy Particles. Param-anus and combinations of these particles create bigger particles from which the Universe gets created. For creation to initiate, the Unique Pure Energy is represented in three phases: positive phase energy, neutral phase eternal time energy and negative phase energy. Positive phase energy further expands into three forms of creative energies (CE1, CE2, and CE3). From CE1 energy, three energy modes, the mode of activation, the mode of action, and the mode of darkness, were created. From these three modes, 16 Principles, the subtlest forms of energies, namely Pradhan, Mahat-tattva, Time, Ego, Intellect, Mind, Sound, Space, Touch, Air, Form, Fire, Taste, Water, Smell, and Earth, get created. In the Mahat-tattva, dominant in the Mode of Darkness, CE1 energy creates innumerable primary singularities from seven principles: Pradhan, Mahat-tattva, Ego, Sky, Air, Fire, and Water. CE1 energy gets divided as CE2 and enters, along with the three modes and time, into each singularity; the primary Big Bang takes place, and innumerable Invisible Universes get created. Each Universe has seven coverings of seven principles, and each layer is 10 times thicker than the previous layer. By the energy CE2, the space in the Invisible Universe under the coverings is divided into two halves. In the lower half, the process of evolution gets initiated, and the seeds of 24 elements get created, out of which 5 fundamental elements, the building blocks of matter, Sky, Air, Fire, Water and Earth, create the seeds of stars, planets, galaxies and all other matter. Since the 5 fundamental elements get created out of the mode of darkness, this explains why there is so much dark energy and dark matter in our Universe. This process of creation in the lower half of the Invisible Universe continues for 2.16 billion years. Further, in the lower part of the energy field, exactly at the centre of the Invisible Universe, the Secondary Singularity is created, through which, by the force of the Mode of Action, the Secondary Big Bang takes place and the Visible Universe gets created in the shape of a Lotus Flower, expanding into the upper part. Visible matter starts appearing after a gap of 360,000 years. Within the Visible Universe, a small part gets created known as the Phenomenal Material World, which is our Solar System, with the sun at the centre. The diameter of the solar planetary system is 6.4 billion km. Keywords: invisible universe, phenomenal material world, primary Big Bang, secondary Big Bang, singularities, visible universe
Procedia PDF Downloads 89
83 Company's Orientation and Human Resource Management Evolution in Technological Startup Companies
Authors: Yael Livneh, Shay Tzafrir, Ilan Meshoulam
Abstract:
Technological startup companies have been recognized as bearing tremendous potential for business and economic success. However, many entrepreneurs who produce promising innovative ideas fail to implement them as successful businesses. A key argument for such failure is the entrepreneurs' lack of competence in adapting the relevant level of formality of human resource management (HRM). The purpose of the present research was to examine multiple antecedents and consequences of HRM formality in growing startup companies. A review of the research literature identified two central components of HRM formality: HR control and professionalism. The effect of three contextual predictors was examined. The first was an intra-organizational factor: the development level of the organization. We drew on the differentiation between knowledge exploration and knowledge exploitation. At a given time, the organization chooses to focus on a specific mix of these orientations, a choice which requires an appropriate level of HRM formality in order to efficiently overcome the challenges. It was hypothesized that the mix of knowledge exploration and knowledge exploitation orientations would predict HRM formality. The second predictor was the personal characteristics of the organization's leader. According to the idea of the blueprint effect of CEOs on HRM, it was hypothesized that the CEO's cognitive style would predict HRM formality. The third contextual predictor was an external organizational factor: the level of investor involvement. Drawing on agency theory and Transaction Cost Economics, it was hypothesized that the level of investor involvement in general management and HRM would be positively related to HRM formality. The effect of formality on trust was examined directly and indirectly through the mediating role of procedural justice. The research method included a time-lagged field study. In the first study, data were obtained using three questionnaires, each directed to a different source: the CEO, the HR position-holder and employees. 43 companies participated in this study. The second study was conducted approximately a year later. Data were collected again using three questionnaires from the same sample. 41 companies participated in the second study. The organizational samples included technological startup companies. Both studies together included 884 respondents. The results indicated consistency between the two studies. HRM formality was predicted by the intra-organizational factor as well as the personal characteristics of the CEO, but not at all by the external organizational context. Specifically, the organizational orientation was the greatest contributor to both components of HRM formality. The cognitive style predicted formality to a lesser extent. The investor's involvement was found not to have any predictive effect on HRM formality. The results indicated a positive contribution to trust in HRM, mainly via the mediation of procedural justice. This study contributed a new concept for technological startup company development based on a mixture of organizational orientations. Practical implications indicate that the level of HRM formality should be matched to the company's level of development. This match should be challenged and adjusted periodically by referring to the organization's orientation, relevant HR practices, and HR function characteristics. A relevant matching could further enhance trust and business success. Keywords: control, formality, human resource management, organizational development, professionalism, technological startup company
Procedia PDF Downloads 264
82 Multi-scale Geographic Object-Based Image Analysis (GEOBIA) Approach to Segment a Very High Resolution Images for Extraction of New Degraded Zones. Application to The Region of Mécheria in The South-West of Algeria
Authors: Bensaid A., Mostephaoui T., Nedjai R.
Abstract:
A considerable area of Algerian land is threatened by the phenomenon of wind erosion. For a long time, wind erosion and its associated harmful effects on the natural environment have posed a serious threat, especially in the arid regions of the country. In recent years, as a result of increases in the irrational exploitation of natural resources (fodder) and extensive land clearing, wind erosion has become particularly accentuated. The extent of degradation in the arid region of the Algerian Mécheria department has generated a new situation characterized by the reduction of vegetation cover, the decrease of land productivity, as well as sand encroachment on urban development zones. In this study, we investigate the potential of remote sensing and geographic information systems for detecting the spatial dynamics of the ancient dune cords based on the numerical processing of PlanetScope PSB.SB sensor images acquired on September 29, 2021. As a second step, we explore the use of a multi-scale geographic object-based image analysis (GEOBIA) approach to segment the high spatial resolution images acquired over heterogeneous surfaces that vary according to human influence on the environment. We used the fractal net evolution approach (FNEA) algorithm to segment the images (Baatz & Schäpe, 2000). Multispectral data, a digital terrain model layer, ground truth data, a normalized difference vegetation index (NDVI) layer, and a first-order texture (entropy) layer were used to segment the multispectral images at three segmentation scales, with an emphasis on accurately delineating the boundaries and components of the sand accumulation areas (dunes, dune fields, nebka, and barkhane). It is important to note that each auxiliary data layer contributed to improving the segmentation at different scales. The silted areas were then classified using a nearest neighbor approach over the Naâma area. The classification of silted areas was successfully achieved over all study areas with an accuracy greater than 85%, although the results suggest that, overall, a higher degree of landscape heterogeneity may have a negative effect on segmentation and classification. Some areas suffered from the greatest over-segmentation and the lowest mapping accuracy (Kappa: 0.79), which was partially attributed to confounding a greater proportion of mixed siltation classes from both sandy areas and bare ground patches. This research has demonstrated a technique based on very high-resolution images for mapping sanded and degraded areas using GEOBIA, which can be applied to the study of other lands in the steppe areas of the northern countries of the African continent. Keywords: land development, GIS, sand dunes, segmentation, remote sensing
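A minimal open-source sketch of the object-based workflow is given below, using Felzenszwalb segmentation as a stand-in for the FNEA algorithm (which is implemented in commercial software), per-segment NDVI and first-order entropy features, and a nearest-neighbour label; the synthetic bands, parameters and training labels are assumptions for illustration only.

```python
# Hedged sketch of the object-based workflow: segment an image, compute per-segment
# NDVI and first-order entropy, then label segments with a nearest-neighbour classifier.
# Felzenszwalb segmentation stands in for FNEA; the synthetic bands, parameters and the
# two labelled training segments are illustrative assumptions, not the study's data.
import numpy as np
from skimage.segmentation import felzenszwalb
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
red = rng.random((200, 200)).astype(np.float32)          # stand-ins for PlanetScope bands
nir = rng.random((200, 200)).astype(np.float32)
stack = np.dstack([red, red, nir])                        # 3-band array for segmentation

segments = felzenszwalb(stack, scale=100, sigma=0.8, min_size=50, channel_axis=-1)
ndvi = (nir - red) / (nir + red + 1e-9)

def segment_features(seg_id):
    mask = segments == seg_id
    values = np.clip(((ndvi[mask] + 1) * 31.5).astype(int), 0, 63)   # quantize for entropy
    p = np.bincount(values, minlength=64) / values.size
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))                  # first-order entropy
    return [ndvi[mask].mean(), entropy]

ids = np.unique(segments)
X = np.array([segment_features(i) for i in ids])

# Tiny invented training set: the first two segments labelled from "ground truth".
knn = KNeighborsClassifier(n_neighbors=1).fit(X[:2], ["silted", "vegetated"])
labels = dict(zip(ids.tolist(), knn.predict(X)))
print(list(labels.items())[:5])
```

In practice, the feature vector would also include the digital terrain model and additional band statistics, and the training segments would come from the field ground truth rather than from arbitrary labels.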
Procedia PDF Downloads 109
81 The Effect of Ionic Liquid Anion Type on the Properties of TiO2 Particles
Authors: Marta Paszkiewicz, Justyna Łuczak, Martyna Marchelek, Adriana Zaleska-Medynska
Abstract:
In recent years, photocatalytic processes have been intensively investigated for the destruction of pollutants, hydrogen evolution, disinfection of water, air and surfaces, and for the construction of self-cleaning materials (tiles, glass, fibres, etc.). Titanium dioxide (TiO2) is the most popular material used in heterogeneous photocatalysis due to its excellent properties, such as high stability, chemical inertness, non-toxicity and low cost. It is well known that the morphology and microstructure of TiO2 significantly influence its photocatalytic activity. These characteristics, as well as other physical and structural properties of photocatalysts, i.e., specific surface area or density of crystalline defects, can be controlled by the preparation route. In this regard, TiO2 particles can be obtained by sol-gel, hydrothermal and sonochemical methods, chemical vapour deposition and, alternatively, by ionothermal synthesis using ionic liquids (ILs). In TiO2 particle synthesis, ILs may play the role of a solvent, soft template, reagent, agent promoting reduction of the precursor, or particle stabilizer during the synthesis of inorganic materials. In this work, the effect of the IL anion type on the morphology and photoactivity of TiO2 is presented. The preparation of TiO2 microparticles with a spherical structure was successfully achieved by a solvothermal method, using tetra-tert-butyl orthotitanate (TBOT) as the precursor. The reaction process was assisted by the ionic liquids 1-butyl-3-methylimidazolium bromide [BMIM][Br], 1-butyl-3-methylimidazolium tetrafluoroborate [BMIM][BF4] and 1-butyl-3-methylimidazolium hexafluorophosphate [BMIM][PF6]. Various molar ratios of the ILs to TBOT (IL:TBOT) were chosen. For comparison, reference TiO2 was prepared using the same method without IL addition. Scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray diffraction (XRD), Brunauer-Emmett-Teller (BET) surface area analysis, NCHS analysis, and FTIR spectroscopy were used to characterize the surface properties of the samples. The photocatalytic activity was investigated by means of phenol photodegradation in the aqueous phase as a model pollutant, as well as the formation of hydroxyl radicals based on detection of the fluorescent product of coumarin hydroxylation. The analysis results showed that the TiO2 microspheres had a spherical structure with diameters ranging from 1 to 6 µm. The TEM micrographs gave a clear view of the samples, in which the particles were comprised of inter-aggregated crystals. It could also be observed that the IL-assisted TiO2 microspheres are not hollow, which provides additional information about the possible formation mechanism. Application of the ILs results in an increase in the photocatalytic activity as well as the BET surface area of TiO2 as compared to pure TiO2. The results of the formation of 7-hydroxycoumarin indicated that the increased amount of ·OH produced at the surface of excited TiO2 for the TiO2_IL samples correlated well with the more efficient degradation of phenol. NCHS analysis showed that ionic liquids remained on the TiO2 surface, confirming the structure-directing role of these compounds. Keywords: heterogeneous photocatalysis, IL-assisted synthesis, ionic liquids, TiO2
Procedia PDF Downloads 267