Kamronbek Yusupov1, Md Rezanur Islam1, Insu Oh2, Mahdi Sahlabadi2, and Kangbin Yim2, 1Software Convergence, Soonchunhyang University, Asan-si, South Korea, 2Department of Information Security Engineering, Soonchunhyang University, Asan-si, South Korea
This paper outlines a mobile application architecture designed to aid users in managing stress and anxiety effectively in their daily lives. The application encompasses a range of features, including meditation, breathing exercises, and stress monitoring, to offer a comprehensive stress management tool. Beyond the technical aspects, the paper delves into ethical considerations related to user privacy and data security. The primary objective is to develop a user-friendly and impactful mobile application that equips individuals with better coping mechanisms for stress and anxiety.
Stress management; Anxiety; Mobile application; Meditation; Breathing exercises.
Gray Cox, Department of Computer College of the Atlantic, Bar Harbor, Maine, USA
This paper explores shortcomings in the conceptual framing of LLMs that characterizes them as nothing more than “autocomplete on steroids” (AOS). It first sketches the view and some key reasons for its appeal. It then argues that, because it tacitly frames its description with the mechanistic metaphor of efficient causality, the AOS view overlooks ways the Attention function in GPT systems introduces features of emergent intentionality into LLM behavior. A conceptual analysis of the functions of variable Attention in GPT reinforcement learning suggests the Aristotelian categories of formal and final causality provide a better understanding of the kinds of pattern recognition found in LLMs and the ways their behaviors seem to exhibit evidence of design and purpose. A conceptual illustration is used to explain the neo-Aristotelian theory proposed. Then descriptions and analyses of a series of experiments with Claude 3 are used to explore empirical evidence for the comparative merits of that theory. The experiments demonstrate the LLM’s ability to engage in the production of texts in ways that exhibit formal and final causality that would be difficult to explain using the mechanical conceptions of efficient causality implied by the “autocomplete on steroids” theory. The paper concludes with a brief review of the key findings, the limits of this study, and the directions for future research that it suggests.
Autocompletion, Formal and Final Causality, Emergent Intentionality, Aristotelian theory of AI, Attention.
Rami Skaik, Leonardo Rossi, Tomaso Fontanini, and Andrea Prati, Department of Engineering and Architecture, University of Parma, Italy
Recent advancements in generative models have revolutionized the field of artificial intelligence, enabling the creation of highly realistic and detailed images. In this study, we propose a novel Mask Conditional Text-to-Image Generative Model (MCGM) that leverages the power of conditional diffusion models to generate pictures with specific poses. Our model builds upon the success of the Break-a-scene [1] model in generating new scenes using a single image with multiple subjects and incorporates a mask embedding injection that allows the conditioning of the generation process. By introducing this additional level of control, MCGM offers a flexible and intuitive approach for generating specific poses for one or more subjects learned from a single image, empowering users to influence the output based on their requirements. Through extensive experimentation and evaluation, we demonstrate the effectiveness of our proposed model in generating high-quality images that meet predefined mask conditions and improve upon the current Break-a-scene generative model.
Fine-tuning, Diffusion Models, Generative Models, Mask Condition.
Iftakhar Ali Khandokar, Tanvina Khondokar, Saiful Islam, Tasmia Ishrat, Alam Chadni, and Priya Deshpande, PhD, Department of Electrical and Computer Engineering, Marquette University
Sentiment analysis is one of the fascinating branches of text analysis, in which a body of text or a document is labeled according to the emotion it conveys. More specifically, the target text document might be labeled by type of emotion, such as positive or negative, and sometimes neutral. The assignment of positive or negative sentiment depends entirely on the context of the problem domain and is subjective at its root. In this work we pursue a similar purpose, but the target data we focus on is financial. In the finance domain there is no personal comment or review whose sentiment should be analyzed; instead, each data tuple is labeled according to the underlying events that may be positive or negative by the norms of the financial world. The target data are regular Bangla news articles, which closely reflect the current financial situation. We have therefore attempted to classify the sentiment of financial news data using several feature models, and have also used unlabelled news data to enhance the performance of these models. Finally, using the best-performing model, we conducted a temporal analysis on five years of time-series news data.
Machine Learning, NLP, Text Analysis, Semi-Supervised Learning.
Ayalew Belay Habtie, Brett Hudson Matthews, David Myhre and Aman Aalam, My Oral Village, Inc., Toronto, Canada
The advent of mobile money has transformed how people manage and conduct their financial transactions. However, conventional mobile money systems predominantly rely on text-based menus, posing significant challenges for illiterate and low-literate individuals in effectively using these services. In response, this study introduces a human-centered solution designed to closely mirror the familiar practices of handling cash among these user groups. Our solution comprises an interface layer, database layer, and digitalized currencies, allowing users to tap on virtual currencies or coins to perform various financial activities. Implemented within the Android environment, the solution includes tutorial videos to guide users in navigating and utilizing the application effectively. Our human-centered design approach for this Android-based mobile money solution represents a significant advancement in enhancing financial empowerment for illiterate and low-literate individuals in Pakistan. By prioritizing user-friendly design principles and addressing the specific needs of these users, our application promotes greater financial inclusion and economic participation. This innovative solution not only bridges the gap between technological advancement and accessibility but also contributes to the socio-economic development of the country, fostering a more inclusive and equitable financial ecosystem.
Human-Centered Design, Digital Currency, Mobile Money, Financial Inclusion, Android Environment.
Charlie Jin Woo Park, Dr. Douglas Walker
The purpose of this literature review is to investigate the effects of e-cigarettes on oral health in young adults, highlighting the considerable risks associated with tobacco smoking, such as oral cancer and periodontitis. E-cigarettes first gained popularity in 2006 and are now used by over 40 million people worldwide; however, their impact on oral health, and whether they carry oral health risks similar to traditional tobacco products, is not well understood. This study consolidates findings from basic science, microbiology, clinical research, and epidemiological studies to investigate potential oral health consequences of e-cigarette use. It also highlights the crucial role of dental professionals in educating patients and advocating for tobacco cessation, despite the challenges introduced by the novelty of e-cigarettes and existing research biases. For this study, a comprehensive literature search was conducted using PubMed to identify studies focused on tobacco smoking and vaping and their influence on the dental health of young adults aged 20 to 30. Initially, 30,000 papers were found, and after filtering, 14 relevant studies were included in the review. The studies reviewed indicate a correlation between vaping and an increased risk of dental caries. The review points out the urgent need for more comprehensive studies to understand the long-term effects of vaping on oral health. While traditional tobacco use's negative impact on oral health is well documented, this review underscores that e-cigarettes also pose significant risks, highlighting the importance of ongoing research and education in the dental community to navigate the evolving landscape of tobacco use and its implications for oral health.
E-cigarettes, Tobacco products, Oral health, Young adults, Oral cancer, Periodontitis, Dental caries, Gum disease, Vaping, Smoking, Dental health, Nicotine, Dental professionals, Tobacco cessation.
Paula Cristina de Almeida Marques, Paulo Alexandre Teixeira Faria de Oliveira, University of Minho, Braga, Portugal
In the face of increasingly complex and frequent global crises such as the COVID-19 pandemic, organizations need a more coordinated approach in order to enhance their resilience and response capacity. This research evaluates the synergistic benefits of intelligent systems, namely Artificial Intelligence (AI) and Machine Learning (ML), combined with the Balanced Scorecard (BSC) strategic management tool, in preparing organizations for future challenges and crises. AI and ML have proved quite useful for predicting demand, optimizing resources, and supporting real-time decision-making. During the COVID-19 pandemic these systems were used in near real time to predict outbreaks, allocate scarce resources such as ventilators and intensive care unit (ICU) beds, and support rapid diagnoses under high-pressure conditions. Unlike the BSC, such solutions enable quick and efficient responses to external shocks; however, they need to be accompanied by a long-term strategic framework like the BSC, which ensures that crises are managed in a fast yet coordinated way aligned with an organization's objectives. Because the BSC comprises four perspectives (financial, customer, internal process, and human resources), it promotes integration and alignment, offering a comprehensive view of balanced management between short-term business practices and long-term sustainability. Through case studies in health care and a comparison of AI/ML implementations with the BSC, this paper advocates merging both approaches as a means of enhancing organizational resilience in times of crisis. It argues that combining these technologies with the BSC is indispensable for fostering sustainable, strategically aligned responses and achieving long-lasting organizational success.
Artificial Intelligence, Machine Learning, Balanced Scorecard, Crisis Management, Organisational Resilience.
Giulio Ramaccioni, Faculty of Law, University e-Campus, Novedrate, Italy
The topic addressed in this research is the right to erasure (or the right to be forgotten) and its practical application. This right has taken on great relevance in the European landscape thanks to Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, repealing Directive 95/46/EC (General Data Protection Regulation). The study is therefore characterized by: (i) the analysis of the legislative frame of reference, represented by Art. 17 of the Regulation; (ii) the verification of the law in action, conducted through the study of five cases resolved out of court: 1) the case of the online article about the collapse of a well-known Italian company; 2) the case of the online article about rigged tenders; 3) the case of the website of a political party; 4) the case of the Panama Papers; 5) the case of the website of an Italian region. In this way, it will be possible to identify the legal problem characterizing the five cases and the concrete operational rules adopted for their resolution.
Right to erasure, Right to be forgotten, Privacy, Data protection, Human rights.
Abdeen Mustafa Omer, Energy Research Institute (ERI), Nottingham NG7 4EU, United Kingdom
Geothermal heat pumps (GSHPs), or direct expansion (DX) ground source heat pumps, are a highly efficient renewable energy technology, which uses the earth, groundwater or surface water as a heat source when operating in heating mode, or as a heat sink when operating in cooling mode. The technology is receiving increasing interest because of its potential to reduce primary energy consumption and thus reduce emissions of greenhouse gases (GHGs). Its main concept is to utilise the lower temperature of the ground (below approximately 32°C), which remains relatively stable throughout the year, to provide space heating, cooling and domestic hot water inside the building area. The main goal of this study is to stimulate the uptake of GSHPs. Recent attempts to stimulate alternative energy sources for heating and cooling of buildings have emphasised the utilisation of ambient energy from ground sources and other renewable energy sources. The purpose of this study, however, is to examine the means of reducing energy consumption in buildings, identify GSHPs as an environmentally friendly technology able to provide efficient utilisation of energy in the buildings sector, promote the use of GSHP applications as an optimum means of heating and cooling, and present typical applications and recent advances of DX GSHPs. The study highlighted the potential energy saving that could be achieved through the use of ground energy sources. It also focuses on the optimisation and improvement of the operating conditions of the heat cycle and the performance of the DX GSHP. It is concluded that the direct expansion of the GSHP, combined with the ground heat exchanger in foundation piles and seasonal thermal energy storage from solar thermal collectors, is extendable to more comprehensive applications.
Geothermal heat pumps, direct expansion, ground heat exchanger, heating and cooling.
Omar Ali1 and Ahmad Al-Ahmad2, 1Abdullah Al Salem University, Kuwait, 2Gulf University for Science and Technology, Kuwait
While the rapid growth of information technology (IT) adoption within organizations is evident, insufficient attention has been directed towards comprehensively addressing the associated challenges. The existing literature moreover lacks a comprehensive understanding of the whole IT adoption process, which the authors address here by providing a systematic review of the challenges to technology adoption based on a total of 98 peer-reviewed articles from the business and management literature from 2018 to 2024. Accordingly, this review study broadens scholarly understanding of the importance of strategic IT agility and the need to keep pace with competitive information systems (ISs) and IT environments. The findings enhance understanding of the pre-change and post-change process of IT adoption, expanding knowledge on adoption success and organizational strategies for achieving IT strategic agility. Three key contributions include addressing the lack of comparative studies on IT adoption challenges, adopting a unified approach with an integrated research model, and emphasizing the importance of enhancing an organization's absorptive IT capacity for strategic agility. Future research is encouraged to explore micro and macro features of IT adoption.
Information Technology, Adoption, Challenges, Strategic Agility.
Chirag Seth1, Divya Naiken2, Keyan Lin2, 1Electrical and Computer Engineering, University of Waterloo, Canada, 2System Design Engineering, University of Waterloo, Canada
This research project addresses the challenge of accurately tracking eye movements during specific events by leveraging previous research. Given the rapid movements of human eyes, which can reach speeds of 300°/s, precise eye tracking typically requires expensive, high-speed cameras. Our primary objective is to locate the eye center position (x, y) using inputs from an event camera. Eye movement analysis has extensive applications in consumer electronics, especially in VR and AR product development. Therefore, our ultimate goal is to develop an interpretable and cost-effective algorithm using deep learning methods to predict human attention, thereby improving device comfort and enhancing overall user experience. To achieve this goal, we explored various approaches, with the CNN-LSTM model proving most effective, achieving approximately 81% accuracy. Additionally, we propose future work focusing on Layer-wise Relevance Propagation (LRP) to further enhance the model’s interpretability and predictive performance.
Shang Xinping and Wang Yi, Artificial Intelligence, Dongguan City University, Dongguan, Guangdong, China
With the emergence of Internet finance, competition in the banking industry is becoming increasingly fierce. To gain a more accurate and comprehensive insight into customer needs and improve customer loyalty, it is necessary to establish a customer churn analysis model that identifies customers who are about to leave, so that banks can make business decisions, retain those users, and ensure that the bank's interests are not affected. Against this background, this paper establishes a customer churn prediction model using an ensemble learning algorithm. The experimental data show that the model can predict and analyze bank customer churn.
customer churn, data preprocessing, XGBoost.
Binu C. T., Dr. S. Saravana Kumar, Dr. Rubini P, School of Engineering & Technology, CMR University, Bengaluru, Karnataka, India
The use of multi-cloud environments for chemical plants and other critical infrastructures is a growing trend that poses a considerable security problem, especially in the areas of access control and continuity of operations during low-bandwidth, offline, or no-internet-connectivity situations. This paper focuses on the evaluation of the Gros and Kerry authentication mechanism, a dual-user authentication scheme aimed at increasing security in high-risk areas. This method uses time-sensitive, token-based passwords and requires two users to authenticate at the same time, making it very secure against token interception and replay attacks. Notably, the Gros and Kerry mechanism is designed to work independently of any Internet connection, meaning that critical applications remain secure and available even in remote situations or emergencies when other security measures may fail. This research validates the effectiveness of the proposed system in a simulated chemical plant environment, and its high security and operational reliability make it a suitable solution for other high-security applications. Subsequent studies will attempt to incorporate this mechanism with other technologies, such as artificial intelligence and blockchain, to enhance its functionality. Load balancing is a complementary function whereby the system detects a failed node and rectifies it through balancing measures.
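As a rough illustration of the dual-user, time-sensitive token idea described above (a sketch only, not the actual Gros and Kerry protocol, whose details are in the paper), a TOTP-style scheme can require two operators to present valid tokens for the same time window using nothing but shared secrets and local clocks:

```python
import hashlib
import hmac
import struct
import time

STEP = 30  # token validity window in seconds

def token(secret: bytes, t: float) -> str:
    """Derive a time-sensitive 6-digit token (TOTP-style) from a shared secret."""
    counter = struct.pack(">Q", int(t // STEP))
    digest = hmac.new(secret, counter, hashlib.sha256).digest()
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"

def dual_authenticate(secret_a: bytes, secret_b: bytes,
                      token_a: str, token_b: str, t: float) -> bool:
    """Grant access only when BOTH users present a valid token for the
    current time step; works offline because only local clocks are needed."""
    ok_a = hmac.compare_digest(token(secret_a, t), token_a)
    ok_b = hmac.compare_digest(token(secret_b, t), token_b)
    return ok_a and ok_b

# Example: both operators must supply tokens for the same window
now = time.time()
granted = dual_authenticate(b"key-A", b"key-B",
                            token(b"key-A", now), token(b"key-B", now), now)
```

Because tokens derive only from pre-shared secrets and local time, authentication keeps working with no connectivity, and rotating the window every STEP seconds limits the useful lifetime of an intercepted token.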
multi-cloud environments, critical infrastructure, security, Gros and Kerry authentication, dual-user authentication, low bandwidth, offline functionality, no internet, token-based passwords, operational continuity, cybersecurity, load balancing.
Kobra Khanmohammadi1,3, Zakeya Namrud1,3, François Labrèche2, and Raphaël Khoury1, 1Département d’informatique et d’ingénierie, Université du Québec en Outaouais, Gatineau, Québec, 2Secureworks, Atlanta, GA, 3All authors contributed equally
In recent years, there has been a noticeable increase in the number of publicly reported vulnerabilities, posing significant challenges for organizations striving to update their systems promptly. This underscores the critical need for prioritizing certain vulnerability fixes over others to mitigate the risk of cyberattacks. Unfortunately, the current methods available for assessing the exploitability impact of vulnerabilities have substantial shortcomings. In particular, they often rely on predictive calculations based on data that may not be readily available at the time a vulnerability is first reported. In this paper, we introduce an innovative exploitability prediction method that exclusively utilizes information available at the time of a vulnerability’s initial disclosure. Our approach demonstrates superior performance compared to the most widely used vulnerability prioritization algorithms in scenarios where data is subject to the aforementioned limitations.
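As a hedged illustration of "information available at initial disclosure" (the paper's actual feature set is not detailed in this abstract), the CVSS base vector that accompanies most CVE reports can be turned into model-ready features with a few lines of parsing:

```python
def cvss_features(vector: str) -> dict:
    """Split a CVSS v3.x base vector, typically available in a CVE report
    at disclosure time, into a metric -> value feature dictionary."""
    parts = vector.split("/")
    assert parts[0].startswith("CVSS:3"), "expected a CVSS v3.x vector"
    return dict(p.split(":") for p in parts[1:])

# Hypothetical example: network-exploitable, low attack complexity
feats = cvss_features("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H")
```

The resulting categorical features (attack vector, complexity, privileges required, etc.) require no post-disclosure exploit telemetry, which is the constraint the paper's prediction method operates under.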
Vulnerability prioritization, exploitability prediction, vulnerability assessment, CVE report analysis.
Husam Lahza1 and Badr Alsamani2, 1Information Technology Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia, 2Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University, Riyadh, Saudi Arabia
In an era where digital authentication is paramount, the prevalence of weak passwords poses significant cybersecurity risks. This study explores the integration of persuasive technology, specifically the Fogg Behavioral Model (FBM), to enhance password creation practices. By examining users' motivations, abilities, and triggers, we establish a design principle to influence stronger password choices. Our proposed solution addresses gaps in traditional password strength meters by incorporating fear-based motivational messages and clear, concise instructions. This research contributes to both the theoretical understanding and practical implementation of sustainable password practices, aiming to reduce the practice of selecting weak passwords.
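A minimal sketch of how the FBM elements might combine in a password meter (hypothetical heuristics, not the authors' actual design): a simple ability-oriented score paired with a fear-based trigger message for weak choices:

```python
import re

def strength(pw: str) -> int:
    """Score 0-4: length, mixed case, digits, symbols (toy heuristics)."""
    score = 0
    if len(pw) >= 12:
        score += 1
    if re.search(r"[a-z]", pw) and re.search(r"[A-Z]", pw):
        score += 1
    if re.search(r"\d", pw):
        score += 1
    if re.search(r"[^A-Za-z0-9]", pw):
        score += 1
    return score

def feedback(pw: str) -> str:
    """Pair the score with an FBM-style trigger: a fear-based motivational
    message plus a clear, concise instruction for improving the password."""
    s = strength(pw)
    if s <= 1:
        return "Weak: passwords like this are cracked in seconds. Use 12+ characters with symbols."
    if s <= 3:
        return "Fair: a targeted attacker could still break this. Mix cases, digits, and symbols."
    return "Strong: good length and character variety."
```

The fear-based message supplies motivation while the concrete instruction keeps the required ability low, which is the FBM pairing the abstract describes.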
Password Security, Persuasive Technology, Fogg Behavioral Model, Cybersecurity, Password Strength, Password Meters.
Luis Alejandro Vargas Oviedo, Edwar Jacinto Gómez and Fernando Martínez Santa, Facultad de Ingeniería, Universidad Distrital Francisco José de Caldas, Bogotá, Colombia
In recent decades, information security has become a rapidly growing priority: technological developments must operate at the levels of reliability the current environment demands in order to store sensitive data safely. This is key to meeting modern data protection requirements. It should be noted that the most common tool for encrypting information is AES (Advanced Encryption Standard), a block cipher algorithm that applies binary keys and performs rounds of substitution and exchange to hide the message; RSA (Rivest-Shamir-Adleman), on the other hand, is an algorithm used for key assignment. Finally, a HASH key-extension function is used in the cryptographic analysis to verify the authenticity and origin of the data. In this research, a source code that combines the AES, RSA, and HASH encryption algorithms is designed to run on two hybrid acceleration units, in order to compare data processing in terms of time and reliability.
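The combined scheme can be sketched as follows, using stdlib stand-ins: a SHA-256 XOR keystream in place of AES, textbook small-prime RSA for key wrapping, and a SHA-256 digest as the hash check. This is illustrative only and not secure:

```python
import hashlib
import secrets

# Toy parameters: textbook RSA with tiny primes (p=61, q=53).
# For illustration only; never use in practice.
N, E, D = 3233, 17, 2753

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Stand-in for AES: XOR with a SHA-256-derived keystream."""
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

def hybrid_encrypt(msg: bytes):
    session = secrets.token_bytes(16)            # random symmetric session key
    body = xor_stream(session, msg)              # "AES" layer hides the message
    wrapped = [pow(b, E, N) for b in session]    # "RSA" layer wraps the key
    tag = hashlib.sha256(msg).hexdigest()        # hash verifies authenticity
    return wrapped, body, tag

def hybrid_decrypt(wrapped, body, tag):
    session = bytes(pow(c, D, N) for c in wrapped)
    msg = xor_stream(session, body)
    assert hashlib.sha256(msg).hexdigest() == tag, "integrity check failed"
    return msg
```

The structure mirrors the scheme in the abstract: a fast symmetric cipher for the bulk data, an asymmetric algorithm for key assignment, and a hash to verify the data's origin and integrity.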
Information Security, Key Extension, Embedded Systems, Mixed Encryption Scheme, Hybrid Acceleration Unit.
Divyam Sharma1 and Divya Santhanam2
Writing stories is an engaging yet challenging endeavor. Often, authors encounter moments of creative block, where the path forward in their narrative becomes obscured. This paper is designed to address such moments by providing an innovative solution: a tool that completes stories based on given prompts. By inputting a short story prompt, users can receive a conclusion to their story, articulated in one sentence or more, thereby enhancing the storytelling process with AI-driven creativity. This tool aims not only to assist authors in navigating writer’s block but also to offer a fun and interactive way for anyone to expand on story ideas spontaneously. Through this paper, we explore the intersection of artificial intelligence and creative writing, pushing the boundaries of how stories can be crafted and concluded. To create our final text-generation models, we used a pre-trained GPT-3.5 model and a newly created fine-tuned SSM Mamba model, both of which perform well on a comprehensive list of metrics including BERTScore, METEOR, BLEU, ROUGE, and perplexity. The SSM model has also been made public for the NLP community on Hugging Face Models as an open-source contribution, which for the time being is a first-of-its-kind state-space model for the story-generation task on Hugging Face.
Story Ending Generation, Zero-Shot Learning, State Space Models, LoRA, PEFT, LLM, Creative Writing.
Limin Ma1, Ken Pu1, Ying Zhu2, and Wesley Taylor3, 1Faculty of Science, Ontario Tech University, 2Faculty of Business and IT, Ontario Tech University, 3Legion Development Group
This study presents a comparative analysis of a complex SQL benchmark, TPC-DS, with two existing text-to-SQL benchmarks, BIRD and Spider. Our findings reveal that TPC-DS queries exhibit a significantly higher level of structural complexity compared to the other two benchmarks. This underscores the need for more intricate benchmarks to simulate realistic scenarios effectively. To facilitate this comparison, we devised several measures of structural complexity and applied them across all three benchmarks. The results of this study can guide future research in the development of more sophisticated text-to-SQL benchmarks. We utilized 11 distinct large language models (LLMs) to generate SQL queries based on the query descriptions provided by the TPC-DS benchmark. The prompt engineering process incorporated both the query description as outlined in the TPC-DS specification and the database schema of TPC-DS. Our findings indicate that the current state-of-the-art generative AI models fall short in generating accurate decision-making queries. We conducted a comparison of the generated queries with the TPC-DS gold standard queries using a series of fuzzy structure matching techniques based on query features. The results demonstrated that the accuracy of the generated queries is insufficient for practical real-world application.
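The paper's exact complexity measures are not enumerated in this abstract, but the idea can be sketched by counting structural features that make a query hard to generate, such as joins, nesting, aggregation, and set operations:

```python
import re

def structural_complexity(sql: str) -> dict:
    """Count a few structural features of a SQL query (toy measures,
    not the paper's actual metrics): joins, nested subqueries,
    aggregate calls, and set operations."""
    s = sql.upper()
    return {
        "joins": len(re.findall(r"\bJOIN\b", s)),
        "subqueries": max(s.count("SELECT") - 1, 0),
        "aggregates": len(re.findall(r"\b(SUM|AVG|COUNT|MIN|MAX)\s*\(", s)),
        "set_ops": len(re.findall(r"\b(UNION|INTERSECT|EXCEPT)\b", s)),
    }
```

Summing such counts per query gives a crude ordering under which multi-join, deeply nested TPC-DS decision-support queries score far above the mostly single-table questions typical of simpler text-to-SQL benchmarks.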
Pronab Pal, KEYBYTE SYSTEMS, Melbourne, Australia
In today’s cloud-native multi-core environments, the deployment and runtime performance of applications often take precedence over the visibility and agility of business logic. While this prioritization ensures optimal responsiveness, it can inadvertently create barriers to real-time analysis and rapid business adaptability. Traditional approaches necessitate reproducing issues in development environments and implementing additional tracing mechanisms, leading to delays in problem resolution and business evolution. This paper introduces the Prompt and Response (PnR) computing model. This paradigm shift addresses these challenges by maintaining a clear representation of intention flow and object data throughout the application lifecycle. The PnR system enables real-time analysis and intelligence derivation in production environments, transcending the limitations of container boundaries and module isolation. By representing every input and result of each module associated with distinct intentions within the PnR framework, we create a unified and traceable computational space called Intention Space. This approach allows for precise identification and analysis of specific modules referred to as just ’Design Chunks’, regardless of their distribution across single or multiple containers or boundaries. We explore the architectural patterns of PnR transformations, illustrating how this model aligns with and extends current computing paradigms while offering a more flexible and transparent approach to managing complex, distributed systems. This paper aims to provide a computational foundation for implementing PnR systems, paving the way for more adaptable, analysable, and efficient cloud-native applications.
Prompt and Response, Design Chunk, Intentions, Objects, Cross-Container Consistency, Input Process Identification, Output Process Identification, Execution State Identification, Common Path of Execution and Understanding, Intention Loop, Space Loop, Intention Emission-Reflection-Absorption.
Tashwin SJ and Preethi P and Mamatha HR, Department of Computer Engineering, PES University, Bengaluru.
Named entities are dynamic: as contexts shift in a language, new entities emerge, reflecting change over time. Large language models, however, have a fixed vocabulary, which makes it difficult to process out-of-vocabulary (OOV) words. Deep learning methods such as BiLSTM and the attention mechanism can achieve strong results for named entity recognition, but for low-resource languages like Kannada the available data is not sufficient to train such models, so this work attempts to solve the NER problem using aggregated Conditional Random Fields. Training is done on the ai4bharat/naamapadam dataset, the largest publicly available dataset for 11 major Indian languages including Kannada. Tokenization is done using the transformer-based multilingual model xlm-roberta-base. In this experiment, OOV words are split into multiple sub-tokens, aggregated using an additive pooling layer, and then passed to the CRF classifier. Results show that the additive pooling model performs better than both the base-model benchmark and the vanilla CRF. Our model produced an F1 score of 78.97 on data containing around 5,728 OOV words in the test set and 14,545 OOV words in the validation set.
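The additive pooling step can be sketched in a few lines: the embeddings of an OOV word's sub-tokens are summed element-wise into a single word-level vector before being passed to the CRF layer (toy 4-dimensional vectors, for illustration only):

```python
def additive_pool(subtoken_vectors):
    """Aggregate the sub-token embeddings of one (possibly OOV) word into
    a single word-level vector by element-wise addition."""
    dim = len(subtoken_vectors[0])
    pooled = [0.0] * dim
    for vec in subtoken_vectors:
        for i in range(dim):
            pooled[i] += vec[i]
    return pooled

# Hypothetical example: an OOV word split by the tokenizer into three
# sub-tokens, each with a small 4-dimensional embedding.
subtokens = [[0.1, 0.2, 0.0, 0.3],
             [0.4, 0.0, 0.1, 0.1],
             [0.0, 0.1, 0.2, 0.0]]
word_vector = additive_pool(subtokens)
```

Pooling restores a one-vector-per-word alignment, so the CRF sees one emission per word regardless of how many sub-tokens the tokenizer produced.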
CRF, NER, LLM, transformer.
Epilogue Jedishkem1, Jesfaith Jedishkem2, 1Ngwani College, Eswatini, 2Freelance Artist Researcher, Eswatini.
College students are noted for not reading, so the researchers sought AI as a tool of motivation; in the process of meeting this need, another interference surfaced: English, a linguistic barrier. Multi-cultural differences are barriers to literacy [1]; similarly, AI natural language processing (NLP) and prompting require command of the English language, a serious obstacle for the deprived. Drawing on these observations, the study sought to discover how AI educational tools can aid in acquiring knowledge in higher education. Empirically, the soul of research is discovery; to understand why college students do not read and what motivating factors can be considered, exploratory research was adopted. The study covered 162 first-year college students. AI was purposefully identified as a motivational tool. Students were optimistic about AI integration into education; findings highlight that advanced and proficient students had confident outcomes in interacting with AI, whereas less proficient students faced challenges.
Artificial intelligence, Higher education, Language barrier, NLP.
Arnaud Lucas, Wanderu, United States of America
This document explores the concept of "domain ownership" in a fast-paced technological environment. Effective domain ownership is a continuous learning journey: Define a Domain: Establish clear boundaries and understand your domain's purpose, just like understanding the ingredients for a perfect cake. Gain Visibility and Ensure Quality: Build a solid foundation by gaining insights into a domain's performance and addressing quality issues. Think of it as addressing the basic needs in Maslow's Hierarchy for domains. Chart Your Course: Create a roadmap for the future with a compelling vision, aligned strategies, and well-defined milestones. Imagine driving with a clear destination and GPS guidance. Execute with a Backlog: Translate your plans into action by building a prioritized backlog for delivering value. Ownership fosters a sense of accomplishment, leadership, and flexibility, but requires strong expertise and leadership to be effective, as domain owners must advocate for their domains, influencing stakeholders and fostering collaboration across teams.
Domain Ownership, Domain-Driven Design (DDD), Tech Leadership, Team Ownership, Organizational Agility.
Marcelo S. Alencar, Institute of Advanced Studies in Communications (Iecom), Federal University of Rio Grande do Norte, Natal, Brazil
This article presents a mathematical model for the effect of interference caused by a sudden increase in the number of users that access a QAM digital cellular communication system. As demonstrated, the rapid increase in the number of users accessing the system causes nonstationary traffic in the network. A stochastic differential approach is used to model this epidemic interference effect.
Wireless communications, QAM modulation, Stochastic differentiation, Interference analysis.
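The paper's own model is not reproduced here, but as a rough, hypothetical illustration of the kind of stochastic-differential modelling the abstract describes, the sketch below simulates an epidemic-style surge in active users via Euler–Maruyama integration of a logistic-growth SDE with multiplicative noise. All parameter names and values (growth rate, capacity, noise level) are assumptions chosen for the illustration, not figures from the article.

```python
import math
import random

def simulate_user_growth(n0=10.0, r=0.8, k=1000.0, sigma=0.05,
                         t_end=20.0, steps=2000, seed=42):
    """Euler-Maruyama simulation of the (illustrative) SDE
        dN = r * N * (1 - N/K) dt + sigma * N dW,
    an epidemic-style model of a sudden surge in active users
    saturating near a system capacity K."""
    rng = random.Random(seed)
    dt = t_end / steps
    n = n0
    path = [n]
    for _ in range(steps):
        dw = rng.gauss(0.0, math.sqrt(dt))          # Brownian increment
        n += r * n * (1.0 - n / k) * dt + sigma * n * dw
        n = max(n, 0.0)                              # user count stays non-negative
        path.append(n)
    return path

path = simulate_user_growth()
print(f"initial users: {path[0]:.0f}, final users: {path[-1]:.0f}")
```

The nonstationary character shows up in the early phase of the path, where the mean and variance of the user count grow over time before the process settles into fluctuations around the capacity.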
Mohamad Al-Samhouri1, M. Abur-rous2 and N. Novas Castellano3, 1Department of Computer Science, University of Almeria, CIAMBITAL, CEIA3,04120, Almeria, Spain, 2Zayed University, College of Technological Innovation, Abu Dhabi, UAE, 19282, 3Department of Engineering, University of Almeria, CIAMBITAL, CEIA3, 04120, Almeria, Spain.
The pervasive integration of the Internet of Things (IoT) has ushered in services across diverse platforms, rendering the generated data an invaluable asset for Digital Forensics (DF). However, investigating IoT environments presents escalating challenges for DF investigators, owing to the intricate and varied structure of the infrastructure. This complexity underscores the need for an innovative methodology that ensures chronological verification of related pieces of evidence. In response, this paper delves into the synergistic application of Fog Computing (FC) and Blockchain (BC) to strengthen the security of digital evidence forensics. The exploration encompasses the key enabling technologies: DF, FC, Distributed Ledger Technology (DLT), and Chain of Custody (CoC). Significantly, the study transcends conventional forensics, giving investigators access to more dependable sources of evidence and facilitating root-cause identification so that future attacks can be proactively prevented, aided by the harnessing of quantum computing and data mining. The evaluation of FC- and BC-based IoT forensics centres on upholding the integrity and authenticity of diverse digital evidence through CoC procedures. Based on this survey analysis, a new framework model was developed that investigates the current state of BC-based FC and IoT forensics for securing IoT digital forensic evidence and tracking the integrity and authenticity of the various pieces of digital evidence through the CoC procedure.
Internet of Things (IoT), Digital Forensics (DF), Fog Computing (FC), Blockchain (BC), Distributed Ledger Technology (DLT), Chain of Custody (CoC).
Khalil Ullah1, Muhammad Naeem Ul Hassan2, 1Department of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, 650500, China, 2Yunnan Key Laboratory of Artificial Intelligence, Kunming University of Science and Technology, Kunming, 650500, China.
The Earth's temperature is rising, as evidenced by more extreme heat waves, the melting of ice, and an overall increase in average temperatures. The evidence of the consequences of global climate change is growing, resulting in climate change being designated the "most significant global health threat of the twenty-first century." Consequently, the global community has formulated a comprehensive strategy aimed at achieving environmental sustainability, fostering human progress, and safeguarding the biosphere through the adoption of the Paris Agreement and the Sustainable Development Goals, and emphasis on international collaboration has grown accordingly. This study outlines the potential benefits of integrating the Internet of Things (IoT) into ongoing climate change mitigation initiatives. The paper examines contemporary initiatives implemented globally to leverage recent breakthroughs in 5G/6G technology and the IoT for monitoring, modeling, and mitigating the consequences of global climate change on agricultural practices, water resource depletion, and coastline erosion.
5G/6G, climate change, IoT, sustainability, water resources.
Copyright © WiMo 2024